All Posts
All "final" inputs that your LB balances traffic to need to have the same configuration so that they behave identically regardless of where your request is redirected - they have to have the same tok... See more...
All "final" inputs that your LB balances traffic to need to have the same configuration so that they behave identically regardless of where your request is redirected - they have to have the same tokens defined, should set the same default metadata fields should they not be set by the sender, have the same queue parameters.  
Seems like this issue is due to a transaction command that is not combining the events as intended. This then breaks the search when the other lines are added; however, it does not happen in another app, which leads me to believe the field extraction is not happening properly.
Q1: Tokens do NOT automatically rotate.
Q2: Currently, you must use the API. There is not a way to do it in the GUI.
Q3: You must be an admin to rotate a token.

Help Rotating a Token
1) Settings -> View Profile -> Show User API Access Token (this is your personal API token; you must be an admin to continue).
2) Copy your user API token.
3) Go to Settings -> Access Tokens and identify the token that should be updated (such as Default).
    - Make a note of the current value of your secret.
4) Run a curl command from your desktop computer:
    - Note: 604800 seconds is 7 days. Using a graceful value keeps the old and new secret valid at the same time for a short term so your secret update doesn't cause any disruption.
    - Replace YOUR-REALM with your organization's realm, such as us0, us1, eu0, etc.
    - Replace YOUR-TOKEN-NAME with the name of the token you want to rotate (such as Default).
    - Replace YOUR-USER-API-TOKEN with the token you copied in step 2.

curl -X POST "https://api.YOUR-REALM.signalfx.com/v2/token/YOUR-TOKEN-NAME/rotate?graceful=604800" -H "Content-type: application/json" -H "X-SF-TOKEN: YOUR-USER-API-TOKEN"

Now your token can keep the same name, the expiration date will be refreshed to a year from now, and the secret will be new. You must copy and paste the new secret in the places where you're using this token (e.g., RUM instrumentation, the OTel collector splunk-otel-collector.conf file, or Windows env variables). Be sure to restart any collectors where you update this token secret.
Can you try adding the line below to the end of your search and give it a try?

| noop search_optimization.predicate_push=f

https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Noop#Managing_specific_search_optimizations_with_the_noop_command

Hope this helps. Karma would be appreciated.
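For clarity, a minimal sketch of where the command goes, assuming a simple stats search (the index and sourcetype names here are made up):

index=main sourcetype=my_sourcetype
| stats count by host
| noop search_optimization.predicate_push=f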
Check out this link; it might be helpful for you. https://community.splunk.com/t5/Getting-Data-In/Setting-up-HEC-HTTP-Event-Collector-in-a-indexer-cluster/m-p/560975/highlight/true#M92683
Hello @Cheng2Ready Are you on the Classic stack or the Victoria stack? If you are on the Classic stack, we need to configure these inputs on the IDM.
Yes. During ingestion you overwrite the original sourcetype. From that point on, Splunk has no idea of the original sourcetype whatsoever. At search time it behaves the same as if you'd ingested the data with the new sourcetypes from scratch. Splunk has no idea at search time what happened at index time; it only sees the indexed effects of the index-time operations.
Using the Splunk Cloud Add-on for Salesforce, I had the Configuration all set up and it gave me a green light saying the account was successfully added, but when I moved over to the Inputs tab I got a "request failed with status code 500" error on version 4.9. I ran an update on the app; it is now on version 4.10.0, and now I get this on the Inputs page:
Ok, got it. So if I'm understanding you correctly, configs similar to my example should work to split my syslog events based on the regex at index time, and then when Splunk goes back to process the REPORT/EXTRACTs at search time, it should match fields to the new sourcetypes based on the sourcetypes already indexed by the TRANSFORMS, correct?
There are no miracles. I understand that when you add the "where" command you stop getting any results. That would mean either that delta_time is calculated differently and for some reason its values are never in the desired range (which is very, very unlikely), or that the delta_time field is not getting properly evaluated in the preceding step (which is much more likely).

The easiest way to check is to run the search up to the where command (but without it) and check the contents of the delta_time field. If it is not defined in the app-embedded search, check the values of the fields which delta_time is supposed to be based on. They might be not extracted or wrongly extracted, or - counterintuitively - they might be extracted "too well" and end up being multivalue fields (or string fields - sometimes Splunk doesn't recognize numbers properly and you have to explicitly call tonumber() on them, but it would be surprising if that happened in one case and not the other).
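A minimal sketch of that check, assuming the search currently ends with something like "| where delta_time>0" (everything except delta_time is a placeholder):

<your search up to, but not including, the where command>
| eval delta_time_type=typeof(delta_time), delta_time_values=mvcount(delta_time)
| table _time delta_time delta_time_type delta_time_values

If delta_time_type comes back as "String" or delta_time_values is greater than 1, that would explain why the where clause filters everything out.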
Use the "sort" command, Luke! https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Sort
Something like that. Explanation - Splunk works (except for all the maintenance stuff that happens behind the scenes) generally in two pipelines. One set of things happens during an event's ingestion - the so-called index-time operations. And after the event is indexed, there are search-time operations, which happen while searching the indexes and doing further processing.

So during indexing you rewrite the sourcetype metadata field using TRANSFORMS. The event gets indexed with the new sourcetype. Then when you search the event, it gets parsed according to the sourcetype-defined search-time extractions (REPORT and EXTRACT settings). And those are defined separately for each of the "new" sourcetypes. This is actually quite a typical use case - split a "combined" sourcetype during indexing into separate ones and define different search-time configurations for those sourcetypes.
I ran into this issue today and came upon this post. The error I was getting was "Non-Displayable Column Type BINARY" for several columns in my select query. As it turns out, you can modify your select query to cast columns as different types in MySQL, which I did, and it solved my issue:

SELECT
    CAST(field_name AS CHAR) AS field_name
FROM
    table_name

I hope this is helpful to anyone else who encounters the same issue.
Hi @Nawaz Ali.Mohammad  Would you please confirm whether there is still no public API to fetch the JVM details? Also, is there a public API to get the application language, e.g. Java, NodeJS, PHP? If not, is it possible to use custom attributes to get such details? Or can we access/execute a query through an API call?
Thanks a lot, this really helps to solve the current problem. But is there a generic way to solve this? Otherwise we would have to write this condition for every API that matches this pattern or falls into this type of API pattern, e.g.:

| eval url=if(mvindex(split(url,"/"),1)="getFile","/getFile",url)
| eval url=if(mvindex(split(url,"/"),1)="import","/import",url)
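A sketch of one more generic approach, assuming you can maintain the list of such endpoint names yourself (getFile and import come from the example above; note that the IN operator inside eval requires a reasonably recent Splunk version):

| eval first_segment=mvindex(split(url,"/"),1)
| eval url=if(first_segment IN ("getFile","import"), "/".first_segment, url)
| fields - first_segment

Adding a new endpoint then only means extending the IN list rather than writing another eval line.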
This is a followup question to the solution on this thread: https://community.splunk.com/t5/Getting-Data-In/create-multiple-sourcetypes-from-single-syslog-source/m-p/701337/highlight/false#M116063

I'm trying to do exactly what the original question asked, but I need to apply different DELIMS/FIELDS values to the different sourcetypes I create this way. The solution says that once the new sourcetype is created, "...just use additional transforms entries with regular expressions that fit the specific subset of data...". Does this mean that if I want to further extract fields from the new sourcetype I can only do that using TRANSFORMS from that point forward, or would I be able to put a new stanza further down in props.conf for [my_new_st] and use additional REPORTs or EXTRACTs that only apply to that new sourcetype?

For example, can I do something like the following? Description: first split the individual events based on the value regex-matched in the 5th field, then do different field extractions for each of the new sourcetypes.

props.conf:

[syslog]
TRANSFORMS-create_sourcetype1 = create_sourcetype1
TRANSFORMS-create_sourcetype2 = create_sourcetype2

[sourcetype1]
REPORT-extract = custom_delim_sourcetype1

[sourcetype2]
REPORT-extract = custom_delim_sourcetype2

transforms.conf:

[create_sourcetype1]
REGEX = ^(?:[^ \n]* ){5}(my_log_name_1:)\s
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype1

[create_sourcetype2]
REGEX = ^(?:[^ \n]* ){5}(my_log_name_2:)\s
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype2

[custom_delim_sourcetype1]
DELIMS = " "
FIELDS = d_month,d_date,d_time,d_source,d_logname,d_info,cs_url,cs_bytes,cs_port

[custom_delim_sourcetype2]
DELIMS = " "
FIELDS = d_month,d_date,d_time,d_source,d_logname,d_info,cs_username,sc_http_status
will do, thanks
@StephenD1  Start a new thread; this is better practice than trying to resurrect a post that has already been answered. You can then reference this previous response.
HaltedCyclesPerDayMA is computed in the eval line above, as shown. The query gives me a stacked column chart (stacked by cycle); I want HaltedCyclesPerDayMA as a line overlay (showing the moving average on top of the raw data).
We have started to notice that since our recent upgrade to Splunk Cloud 9.2 we have been facing authentication issues where our add-ons stop working with errors like "No AWS account named", "Unable to obtain access token", and "Invalid client secret provided" for our AWS and Azure add-ons - basically anything requiring Splunk to decrypt credentials stored in passwords.conf. We're currently engaged with Splunk Support to find a root cause, but has anyone else faced this same problem since upgrading to 9.2?