All Posts

I configured the otelcol-contrib agent config.yaml file to send data to Splunk Cloud. I'm getting the data, but the source is coming through as the HEC token name. The filelog receiver is configured to read several files:

    filelog/sys:
      include: [ /var/log/messages, /var/log/auth, /var/log/mesg$$, /var/log/cron$$, /var/log/acpid$$ ]
      start_at: beginning
      include_file_path: true
      include_file_name: false

In the exporters section I didn't specify a source. Option 1: by default, Splunk takes the HEC token name as the source value. Option 2: I can set the source to a log file path, but that doesn't work when reading multiple files. In Splunk Cloud, source shows the HEC token name, while the log_file_path field gives the actual log file path. Is there a way I can configure the source to take the log file path?
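One approach worth trying (a sketch only, not verified against your collector version) is to copy the filelog receiver's log.file.path record attribute into the com.splunk.source resource attribute, which the splunk_hec exporter maps to the HEC source field by default. The processor name transform/set_source below is arbitrary, and you should confirm the attribute names against your collector's documentation:

```yaml
processors:
  transform/set_source:
    log_statements:
      - context: log
        statements:
          # include_file_path: true makes the receiver set log.file.path on
          # each record; copy it into the attribute the splunk_hec exporter
          # reads the HEC "source" from (its default metadata mapping).
          - set(resource.attributes["com.splunk.source"], attributes["log.file.path"])

service:
  pipelines:
    logs:
      receivers: [filelog/sys]
      processors: [transform/set_source]
      exporters: [splunk_hec]
```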
One of my customers is using a tool with a REST API, available via the SAP ALM Analytics API (ref: https://api.sap.com/api/CALM_ANALYTICS/overview). They are looking to get data from the API into a Splunk index, so we suggested an intermediary application (such as a scheduled function) that pulls data from SAP and sends it to Splunk using an HEC token. Is it possible to use something in Splunk directly to pull the data from the third party, or is the suggested approach the way to go?
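For the intermediary approach, the scheduled function would POST each record to the HEC event endpoint. A minimal sketch with curl, assuming a Splunk Cloud HEC endpoint; the stack name, token, index, and sourcetype below are all placeholders to adapt:

```shell
# <stack> and <token> are placeholders; the index and sourcetype are examples.
curl "https://http-inputs-<stack>.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": {"source_system": "SAP ALM", "payload": "..."},
       "sourcetype": "sap:alm:analytics",
       "index": "sap_alm"}'
```

For pulling directly from Splunk, a scripted or modular input is the usual route, but on Splunk Cloud the push approach via HEC is generally the simpler option.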
Splunk should be the owner of everything under $SPLUNK_HOME.  Anything else is asking for trouble.  Splunk's inability to access or create a file/directory could manifest itself in a number of unexpected ways.
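If ownership has drifted, a typical fix (run as root, with Splunk stopped first; adjust the user and group names to your environment) looks like:

```shell
# Stop Splunk, then reset ownership of the whole installation
/opt/splunk/bin/splunk stop
chown -R splunk:splunk /opt/splunk
```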
Manually sending a span using something like curl can often be trickier than instrumenting a sample application. But, that said, it is possible. I see you're using the Zipkin-compatible endpoint. This page may help if you want to try other formats/endpoints: https://docs.splunk.com/observability/en/apm/apm-spans-traces/span-formats.html

Did you replace <realm> with your realm (e.g. "us0", "us1", etc.)? With a single span, you won't see anything on the service map, but you should see something on the APM home screen. Here is a sample span that worked for me:

    curl -X POST https://ingest.us1.signalfx.com/v2/trace/signalfxv1 \
      -H 'Content-Type: application/json' \
      -H 'X-SF-TOKEN: XXXXXXXXXXXXXXXXXX' \
      -d '[{"traceId": "a03ee8fff1dcd9b9", "id": "2e8cfb154b59a41f", "kind": "SERVER", "name": "post /location/update/v4", "timestamp": 1716495917000, "duration": 131848, "localEndpoint": {"serviceName": "routing"}, "tags": {"ecosystem": "prod", "habitat": "uswest1aprod", "http.uri.client": "/location/update/v4", "region": "uswest1-prod", "response_status_code": "200"}, "shared": true}]'
I am calling the trace endpoint https://ingest.<realm>.signalfx.com/v2/trace/signalfxv1 and sending this span in the body:

    [
      {
        "id": "003cfb6642471ba4",
        "traceId": "0025ecb5dc31498b931bce60be0784cd",
        "name": "reloadoptionsmanager",
        "timestamp": 1716477080494000,
        "kind": "SERVER",
        "remoteEndpoint": { "serviceName": "XXXXXXX" },
        "Tags": { "data": "LogData", "eventId": "0", "severity": "8" }
      }
    ]

The request receives a 200 response and the response body is OK. However, the span does not appear in APM. The timestamp is the number of microseconds since 1/1/1970.
We have a contractor installing a Splunk instance for us. For search heads, we have an NVMe volume mounted for the /opt/splunk/var/run folder. The ./run folder is owned by 'root', and the 'splunk' user cannot write into the folder. Similarly, our indexers have a mounted NVMe volume for the /opt/splunk/var/lib folder, and it too is owned by 'root'. Index folders and files are located one level below that, in the ./lib/splunk folder, where the 'splunk' user is the owner. What are the consequences of having 'root' own these folders on the operation of Splunk? I assumed that when Splunk is running as non-root, it must be the owner of all folders and files from /opt/splunk on down. Am I wrong?
So, I have a loadjob with all the data I need, keyed by a primary field (account number). But I have a CSV with about 104K account numbers that are the only ones they want in this report. How do I filter the loadjob down to just those 104K account numbers? I don't have admin access to change the join limit. Can lookups do the job? I also don't want the values to be grouped together in each row; I just want to remove the account numbers that are not in the CSV from the loadjob.
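A lookup can do this without hitting subsearch or join limits. A sketch, assuming the CSV is uploaded as a lookup file named accounts.csv with a column account_number that matches the loadjob field (the saved search name is a placeholder):

```
| loadjob savedsearch="user:app:my_saved_search"
| lookup accounts.csv account_number OUTPUT account_number AS in_csv
| where isnotnull(in_csv)
| fields - in_csv
```

This keeps one row per event and drops rows whose account number is absent from the CSV. If referencing the CSV filename directly doesn't work in your environment, create a lookup definition for it first and use that name instead.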
Hi @Roberto.Barnes, Interesting, I would think it would be available for SaaS. Let me know what Support says. 
Hi Ryan, It seems that since we are a SaaS customer, the Platforms tab is not enabled. I don't know if that means SaaS customers can't host a Geo Server. I opened a case. Thanks, Roberto
Hi @Roberto.Barnes, Once you are in the downloads screen, you will need to click on the 'Platforms' tab. From there, in the "Types" dropdown, you can find the EUM GEO option.
Beautiful. Many thanks, @ITWhisperer !
If the _count is zero, there are no events, so the addinfo and rename are setting the _time field of the event added by appendpipe to the start time of the search period. This is so that the subsequent timechart command has at least one event with a _time value in the search period. With that one event, the timechart command will then fill in the missing time slots.
You need to set up a link drilldown on this field, using the value from the click as the address to link to. Try something like this:

    <drilldown>
      <condition field="Link">
        <link target="_blank">https://$click.value2|n$</link>
      </condition>
    </drilldown>
Many thanks, @ITWhisperer. That did the trick beautifully. The only bit I didn't quite understand in the query was renaming info_min_time to _time. I'll accept the comment as the solution, but if you'd be so kind as to shed some light on what that line is doing, it would be wonderful. Thanks again, Victor
You need at least 1 result during the period to get the results filled in. If there are no results, you could try something like this:

    | appendpipe
        [| stats count as _count
         | where _count=0
         | addinfo
         | rename info_min_time as _time
         | eval count=0
         | fields _time count]
    | timechart sum(count) as count
    | eval count=coalesce(count,0)
Try using the json_extract_exact function (which doesn't use paths and therefore avoids the issue of keys that look like paths):

    | spath context
    | eval cookiesSize=json_extract_exact(context, "env.cookiesSize")
It happens for all of them. The strange part is, when I first click, you can see the notable name in the URL after "/incident_review?form.rule_name=(rule name)", followed by the earliest/latest timestamps, but after a moment it disappears and is replaced with a new URL which only has the earliest/latest values.
If you want Splunk to extract fields for you then you must use a "standard" format. By default, Splunk will extract fields from events in key=value format.  Other formats, like CSV, JSON, XML, etc. must be specified in props.conf.  JSON and XML events must be well-formed or Splunk will not extract anything from them.
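For example, to have JSON events parsed, a props.conf stanza along these lines (the sourcetype name my_json is a placeholder) is a common starting point. Use one of the two settings, not both, to avoid duplicate field extractions:

```
[my_json]
# Structured extraction at ingest time (set where the data is parsed):
INDEXED_EXTRACTIONS = json
# ...or, alternatively, search-time extraction only:
# KV_MODE = json
```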
Looks like it worked with this query. Thank you so much for the quick and useful response.
Hi All, In the table I have a URL. If I click on the URL, it redirects to an existing dashboard. Now I want to show only "click here"; if I click it, the existing dashboard should open. So instead of the URL value, I need "click here" to open the URL. This is my table:

    Name    Interface    Link
    DSR1    DSR1         www.google.com
    DSR2    DSR2         www.splunk.com