All Posts


Hi,

We are trying to get metrics into Splunk using TCP. So far we have tried the following in inputs.conf (we also tried the sourcetype as "_json" and "metrics_csv"):

[tcp://44444]
connection_host = ip
index = metrics_idx
sourcetype = json_no_timestamp

We can get this to work if we change the sourcetype to statsd and emulate the statsd protocol, but we found this to be very limited.

We have 30-odd machines collecting thousands of data endpoints (mainly counters - it was 5 things, now 12). What would be the best way to get this into Splunk without using JSON/CSV files?

Thanks!
So how would this look? You can only sort in a particular order of precedence, i.e. 30days first, then, if those are equal, 90days, then, if still equal, 1day - you know that, right?
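For illustration, a minimal sketch of such a precedence sort - the field names count_30d, count_90d, and count_1d are placeholders standing in for your 30-day, 90-day, and 1-day values:

| sort - count_30d - count_90d - count_1d

Ties on the first field fall through to the second, and ties there fall through to the third.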
The "use existing data input" feature allows you to reuse the checkpoint for any other input or the same input where you want to start/resume from the last stored checkpoint.
Hi, I'm wondering if it's possible to define and execute a macro from a lookup. I have an index with several (about 50) user actions, which aren't named in a user-friendly manner. Additionally, each action has different fields, which I'd like to extract using inline rex queries. In short, I'd like a table like the following:

Time        UserName  Message
10:00 a.m.  JohnDoe   This is action1. Details for action1.
10:01 a.m.  JohnDoe   This is action2. Details for action2.
10:02 a.m.  JohnDoe   This is action3. Details for action3.

I know I can define a friendly name for the action using a lookup. I can also do the rex field extractions and compose a details field using a macro for each action. However, is there a way to also rex the fields and define the details in a lookup?

I was thinking of creating a lookup like this:

Action   FriendlyDescription  MacroDefinition
action1  "This is action1"    | rex to extract fields for action1 | eval for Details for action1
action2  "This is action2"    | rex to extract fields for action2 | eval for Details for action2
action3  "This is action3"    | rex to extract fields for action3 | eval for Details for action3

And then running something like this:

index=MyIndex source=MySource
| lookup MyLookup.csv ActionId OUTPUT FriendlyDescription, MacroDefinition
`code to execute MacroDefinition`
| table _time, UserName, FriendlyDescription, Details

I'm not sure if I'm barking up the wrong tree, but the reason I'd like to do this in a lookup instead of 50 different macro definitions is that it'd be neat to have all the code in one place. Thanks!
OK, I think I have the formula worked out, and now I can get my average and totals on one chart. I suspect, however, that you can't overlay a line chart on a stacked column chart: now that I am able to add my average to my stacked column chart, it does appear, but as one of the stacked items in each column and not as a line overlay (the large orange stacked counts are the average).
@sainag_splunk what does this error mean?  
@sainag_splunk  Thank you for the prompt response!
Sufficiently modern versions of btool support --app and --user switches, letting you compare effective configs in different search-time contexts (caveat - it doesn't consider permissions).
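For example, a sketch of how that might look - the app, user, and sourcetype names here are placeholders:

splunk btool --app=search --user=jdoe props list my_sourcetype --debug

This lists the effective props.conf settings as they would resolve for that user in that app context, with --debug showing which file each line comes from.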
All "final" inputs that your LB balances traffic to need to have the same configuration so that they behave identically regardless of where your request is redirected - they have to have the same tok... See more...
All "final" inputs that your LB balances traffic to need to have the same configuration so that they behave identically regardless of where your request is redirected - they have to have the same tokens defined, should set the same default metadata fields should they not be set by the sender, have the same queue parameters.  
Seems like this issue is due to a transaction command that is not combining the events as intended. This then breaks the search when the other lines are added; however, it does not happen in another app, which leads me to believe the field extraction is not happening properly.
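One way to check whether the events are failing to combine is to keep the evicted transactions and count them - a minimal sketch, assuming a hypothetical session_id grouping field:

index=my_index
| transaction session_id keepevicted=true
| eval txn_state=if(closed_txn==1, "complete", "evicted")
| stats count by txn_state

If most transactions come out "evicted", the grouping field probably isn't being extracted in the app where the search breaks.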
Q1: Tokens do NOT automatically rotate.
Q2: Currently, you must use the API. There is no way to do it in the GUI.
Q3: You must be an admin to rotate a token.

Help Rotating a Token
1) Go to settings -> view profile -> show user api access token. (This is your personal API token. You must be an admin to continue.)
2) Copy your user API token.
3) Go to settings -> access tokens and identify the token that should be updated (such as Default).
    - Make a note of the current value of your secret.
4) Run a curl command from your desktop computer:
    - Note: 604800 seconds is 7 days. Using a graceful value keeps both the old and new secret valid for a short term so your secret update doesn't cause any disruption.
    - Replace YOUR-REALM with your organization's realm, such as us0, us1, eu0, etc.
    - Replace YOUR-TOKEN-NAME with the name of the token you want to rotate (such as Default).
    - Replace YOUR-USER-API-TOKEN with the token you copied in step 2.

curl -X POST "https://api.YOUR-REALM.signalfx.com/v2/token/YOUR-TOKEN-NAME/rotate?graceful=604800" -H "Content-type: application/json" -H "X-SF-TOKEN: YOUR-USER-API-TOKEN"

Now your token keeps the same name, the expiration date is refreshed to a year from now, and the secret is new. You must copy and paste the new secret everywhere you're using this token (e.g., RUM instrumentation, the OTel collector splunk-otel-collector.conf file, or Windows env variables). Be sure to restart any collectors where you update this token secret.
Can you try adding the line below to the end of your search?

| noop search_optimization.predicate_push=f

https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Noop#Managing_specific_search_optimizations_with_the_noop_command

Hope this helps. Karma would be appreciated.
Check out this link; it might be helpful for you. https://community.splunk.com/t5/Getting-Data-In/Setting-up-HEC-HTTP-Event-Collector-in-a-indexer-cluster/m-p/560975/highlight/true#M92683
Hello @Cheng2Ready Are you on a Classic stack or a Victoria stack? If you are on a Classic stack, you need to configure these inputs on the IDM.
Yes. During ingestion you overwrite the original sourcetype, and from then on Splunk has no idea of the original sourcetype whatsoever. At search time it behaves the same as if you'd ingested the data with the new sourcetypes from scratch. Splunk has no knowledge at search time of what happened at index time; it only sees the indexed effects of the index-time operations.
Using the Splunk Add-on for Salesforce on Splunk Cloud: I had the Configuration all set up, and it gave me a green light saying the account was successfully added, but when I move over to the Input tab I get a "request failed with status code 500" error (Splunk Salesforce add-on input) on version 4.9. I ran an update on the app - it is now on version 4.10.0 - and now I get this on the Input page:
Ok, got it. So if I'm understanding you correctly, configs similar to my example should work to split my syslog events based on the regex at index time, and then, when Splunk goes to process the REPORT/EXTRACTs at search time, it should match fields to the new sourcetypes based on the already-indexed sourcetypes from the TRANSFORMS, correct?
There are no miracles. I understand that when you add the "where" command you stop getting any results. That would mean that either delta_time is calculated differently and for some reason its values are never in the desired range (which is very, very unlikely) or the delta_time field is not getting properly evaluated in the preceding step (which is much more likely). The easiest way to check is to run the search up to the where command (but without it) and inspect the contents of the delta_time field. If it is not defined in the app-embedded search, check the values of the fields that delta_time is supposed to be based on. They might either be not extracted (or wrongly extracted) or - counterintuitively - they might be extracted "too well" and end up as multivalued fields (or as string fields - sometimes Splunk doesn't recognize numbers properly and you have to explicitly call tonumber() on them, but it would be surprising if that happened in one case and not the other).
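For example, a minimal diagnostic sketch to append in place of the where command (the base search stands in for yours):

... your base search up to the where ...
| eval delta_time_type=typeof(delta_time), delta_time_count=mvcount(delta_time)
| table _time delta_time delta_time_type delta_time_count

typeof() shows whether delta_time resolved as a Number, a String, or Invalid (not defined), and a delta_time_count greater than 1 reveals an unintended multivalued field.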
Use the "sort" command, Luke! https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Sort
Something like that. Explanation: Splunk generally works (except for all the maintenance stuff that happens behind the scenes) in two pipelines. One set of things happens during an event's ingestion - so-called index-time operations. And after the event is indexed, there are search-time operations, which happen while searching the indexes and during further processing. So during indexing you rewrite the sourcetype metadata field using TRANSFORMS, and the event gets indexed with the new sourcetype. Then when you search for the event, it gets parsed according to the sourcetype-defined search-time extractions (REPORT and EXTRACT settings). And those are defined separately for each of the "new" sourcetypes. This is actually quite a typical use case - split a "combined" sourcetype during indexing into separate ones and define different search-time configurations for those sourcetypes.
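For illustration, a minimal sketch of that pattern - the stanza names, regex, and target sourcetype are all placeholders:

props.conf:

[syslog_combined]
TRANSFORMS-split_sourcetype = set_sourcetype_appA

transforms.conf:

[set_sourcetype_appA]
REGEX = appA
FORMAT = sourcetype::syslog_appA
DEST_KEY = MetaData:Sourcetype

The search-time REPORT/EXTRACT settings would then live under a [syslog_appA] stanza in props.conf.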