Thank you
I did a logistic regression with essentially the same steps as the slides: I split the data into 70/30 training/testing sets and then adapted the rest of the code. To pick the model's predictors, I first ran the lm function on all the variables with citbi_outcome as the response to see which were significant (I used lm because I'm more familiar with it, although glm with family = binomial is the standard fit for a binary response). I removed the non-significant variables and used the rest in my recipe. To deal with the NA values, I recoded them to 1 for all the binary variables and to 3 for GCS total (3 is the lowest possible score). During the previous project with this data, I saw that the NA cases were clearly more similar to the positive injury cases, which is why I made them positive rather than 0 or the mean.
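A minimal sketch of the steps described above, assuming the data frame is called tbi and that the binary predictor names (vomiting, loc, amnesia) are placeholders for whichever columns survived the significance screen:

```r
library(tidymodels)  # loads rsample, recipes, dplyr, tidyr, etc.

set.seed(42)
split     <- initial_split(tbi, prop = 0.7)   # 70/30 training/testing split
tbi_train <- training(split)
tbi_test  <- testing(split)

# Screen predictors: fit on everything, then drop non-significant terms.
# glm with family = binomial is the standard fit for a binary outcome.
screen <- glm(citbi_outcome ~ ., data = tbi_train, family = binomial)
summary(screen)$coefficients   # inspect p-values here

# Recode NAs toward the positive class: 1 for binary flags, 3 for GCS total.
tbi_train <- tbi_train |>
  mutate(
    across(c(vomiting, loc, amnesia),        # placeholder binary predictors
           \(x) replace_na(x, 1)),
    gcs_total = replace_na(gcs_total, 3)     # 3 is the GCS minimum
  )
```

The same NA recoding could equally live inside the recipe (e.g. via step_mutate()) so it is applied identically to the test set.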
I didn’t use the AI tools you mentioned in class, but I did ask ChatGPT which functions to use to increase the weight of the positive cases and to swap the success class for the metrics (they were previously counting 0 as positive and 1 as negative). My first few models predicted almost everything to be negative, which I attributed to the overwhelming proportion of negative cases in the dataset: accuracy was very high but recall was very low. ChatGPT pointed me to step_upsample() from the themis package you mentioned, and I played around with the ratio until settling on 0.05, which seemed to maximize the F-beta score.
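The rebalancing and metric setup above could look roughly like this. Assumptions: tbi_train/tbi_test are the splits, citbi_outcome is a factor with levels c("0", "1"), and the beta value is a placeholder since the assignment's choice isn't stated:

```r
library(tidymodels)
library(themis)   # provides step_upsample()

rec <- recipe(citbi_outcome ~ ., data = tbi_train) |>
  # upsample the minority (positive) class to 5% of the majority class
  step_upsample(citbi_outcome, over_ratio = 0.05)

wf <- workflow() |>
  add_recipe(rec) |>
  add_model(logistic_reg() |> set_engine("glm"))

fitted <- fit(wf, data = tbi_train)
preds  <- augment(fitted, tbi_test)   # adds .pred_class columns

# yardstick treats the FIRST factor level as the "event" by default, so with
# levels c("0", "1") it counts 0 as positive; event_level = "second" swaps
# that so 1 (positive injury) is the success class.
f_meas(preds, truth = citbi_outcome, estimate = .pred_class,
       beta = 1, event_level = "second")   # beta = 1 is a placeholder (F1)
recall(preds, truth = citbi_outcome, estimate = .pred_class,
       event_level = "second")
```

Note that step_upsample() is applied only when the recipe is prepped on training data, so the test set stays untouched, which is what you want for honest metrics.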
How did it work?
Classes 'Chat', 'R6' <Chat>
Public:
add_turn: function (user, assistant, log_tokens = TRUE)
chat: function (..., echo = NULL)
chat_async: function (..., tool_mode = c("concurrent", "sequential"))
chat_structured: function (..., type, echo = "none", convert = TRUE)
chat_structured_async: function (..., type, echo = "none", convert = TRUE)
clone: function (deep = FALSE)
get_cost: function (include = c("all", "last"))
get_model: function ()
get_provider: function ()
get_system_prompt: function ()
get_tokens: function (include_system_prompt = deprecated())
get_tools: function ()
get_turns: function (include_system_prompt = FALSE)
initialize: function (provider, system_prompt = NULL, echo = "none")
last_turn: function (role = c("assistant", "user", "system"))
on_tool_request: function (callback)
on_tool_result: function (callback)
register_tool: function (tool)
register_tools: function (tools)
set_system_prompt: function (value)
set_tools: function (tools)
set_turns: function (value)
stream: function (..., stream = c("text", "content"))
stream_async: function (..., tool_mode = c("concurrent", "sequential"), stream = c("text",
Private:
.turns: list
callback_on_tool_request: CallbackManager, R6
callback_on_tool_result: CallbackManager, R6
chat_impl: function (...)
chat_impl_async: function (...)
complete_dangling_tool_requests: function ()
echo: output
has_system_prompt: function ()
provider: ellmer::ProviderAnthropic, ellmer::Provider, S7_object
submit_turns: function (...)
submit_turns_async: function (...)
tools: list
Paris.
[[1]]
<Turn: user>
The capital of France is
[[2]]
<Turn: assistant>
Paris.
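The console output above can be reproduced with roughly the following ellmer calls (the provider line in the listing shows ellmer::ProviderAnthropic; API-key setup is assumed):

```r
library(ellmer)

chat <- chat_anthropic()    # returns a Chat R6 object

str(chat)                   # prints the "Classes 'Chat', 'R6' <Chat>" listing

chat$chat("The capital of France is")   # streams the reply: "Paris."

chat$get_turns()            # prints the <Turn: user> / <Turn: assistant> list
```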
Where are DataBot’s logs?
Demo
Let’s say two men are trying to drive a stake into the ground. One of them has a hammer, and the other says, ‘when I nod my head, hit it with the hammer.’
What do you do when you get a flat tire?
