All Posts



While upgrading from 5.0 to 7.3.0, we are facing this error while setting up the account. Can someone help us fix this issue?
The SPL is a bit wonky, but it got results in the final format you were looking for. I'm curious how this SPL will perform against your live data.

| makeresults
| eval _raw="{ \"a.com\": [ { \"yahoo.com\":\"10ms\",\"trans-id\": \"x1\"}, { \"google.com\":\"20ms\",\"trans-id\": \"x2\"} ], \"b.com\": [ { \"aspera.com\":\"30ms\",\"trans-id\": \"x3\"}, { \"arista.com\":\"40ms\",\"trans-id\": \"x4\"} ], \"trans-id\":\"m1\", \"duration\":\"33ms\" }"
``` start parsing json object ```
| fromjson _raw
| foreach *.*
    [ | eval url_json=mvappend( url_json, case(
          mvcount('<<FIELD>>')==1, if(isnotnull(json('<<FIELD>>')), json_set('<<FIELD>>', "url", "<<FIELD>>"), null()),
          mvcount('<<FIELD>>')>1, mvmap('<<FIELD>>', if(isnotnull(json('<<FIELD>>')), json_set('<<FIELD>>', "url", "<<FIELD>>"), null()))
      ) ) ]
| fields + _time, url_json, "trans-id", duration
| rename "trans-id" as "top_trans-id"
| fields - _raw
| mvexpand url_json
| fromjson url_json
| fields - url_json
| foreach *.*
    [ | eval sub_url=if( isnotnull('<<FIELD>>') AND isnull(sub_url), "<<FIELD>>", 'sub_url' ),
             sub_duration=if( isnotnull('<<FIELD>>') AND isnull(sub_duration), '<<FIELD>>', 'sub_duration' ) ]
| rename "trans-id" as "sub_trans-id"
| fields + _time, "top_trans-id", url, duration, sub_duration, sub_url, sub_trans-id
| rename "top_trans-id" as "trans-id"

Final output:

There are some pretty big assumptions here, the biggest being that the keys of the _raw JSON will have fields in the "*.*" format, i.e. a dot in the field name (domain names).
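For reference, the restructuring that the SPL above performs can be sketched in plain Python. This is only a model of the logic against the sample event (field names follow that sample), not part of the search itself:

```python
# Flatten the nested JSON: one output row per sub-request, carrying the
# top-level trans-id and duration alongside each sub-entry's fields.
raw = {
    "a.com": [{"yahoo.com": "10ms", "trans-id": "x1"},
              {"google.com": "20ms", "trans-id": "x2"}],
    "b.com": [{"aspera.com": "30ms", "trans-id": "x3"},
              {"arista.com": "40ms", "trans-id": "x4"}],
    "trans-id": "m1",
    "duration": "33ms",
}

def flatten(event):
    rows = []
    top_id, top_dur = event["trans-id"], event["duration"]
    for key, value in event.items():
        if not isinstance(value, list):
            continue  # skip the scalar top-level fields
        for entry in value:
            sub_id = entry["trans-id"]
            # the remaining key/value pair is the sub-URL and its duration
            sub_url, sub_dur = next((k, v) for k, v in entry.items()
                                    if k != "trans-id")
            rows.append({"trans-id": top_id, "url": key, "duration": top_dur,
                         "sub_url": sub_url, "sub_duration": sub_dur,
                         "sub_trans-id": sub_id})
    return rows
```

Running `flatten(raw)` yields four rows, matching the four sub-entries in the sample event.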
You should be able to build a report around the REST command | rest splunk_server=local /servicesNS/-/-/alerts/fired_alerts
I solved it by using the max_match option in the rex command. The x-forwarded-fors were extracted into a multivalue field x_forwarded_single.

| rex field=_raw "^(?P<timestamp>\w+\s\d+\/\d+\/\d+\s.\s\d+:\d+:\d+\.\d+\s\w+\s\w+)\s(?P<remote_hostname>\S+)\s\((?P<x_forwarded_for>[^\)]*)\)\s\>\s(?P<local_ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(?P<local_port>[\d\-]+)\s\"(?<request>[^\"]+)\"\s(?<request_body_length>\S+)\s(?<time_milli>\S+)\s(?<http_status>\S+)\s(?<bytes_sent>\S+)\s(?<request_thread_name>\S+)\s\"(?<referer>[^\"\s]*)\"\s\"(?<user_agent>[^\"]*)\"\s(?<remote_user>\S+)\s(?<user_session_id>\S+)\s(?<username>\S+)\s(?<session_tracker>\S+)"
| rex field=request "(?<http_method>\w*)\s+(?<url>[^ ]*)\s+(?<http_version>[^\"]+)[^ \n]*"
| rex field=url "(?<uri_path>[^?]+)(?:(?<uri_query>\?.*))?"
| rex field=x_forwarded_for max_match=3 "(?<x_forwarded_single>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
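The max_match=3 behaviour (keep up to three matches as a multivalue result) can be modelled outside Splunk with a quick Python sketch; the sample header value here is made up:

```python
import re

# hypothetical X-Forwarded-For value pulled out of a proxy log line
xff = "203.0.113.7, 10.0.0.1, 10.0.0.2, 10.0.0.3"

# mimic `rex max_match=3`: collect dotted-quad matches, cap at three
ips = re.findall(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", xff)[:3]
```

With the sample above, `ips` holds the first three addresses; the fourth is dropped, just as max_match=3 would drop it.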
So there are a couple of ways you could approach this:

1. Less elegant solution - you can add a decision/filter block to the existing playbook that runs every time to check for the tag. If the tag matches, continue; otherwise, end the playbook.
2. If you truly don't want the playbook to run at all, I think you'll need to make a parent playbook with a decision or filter, and call the subplaybook from there. The parent playbook will still run on all events with the label, but the playbook you want to run wouldn't run at all.

I added a screenshot of the datapath you'd need to match the container tags.
Hi @sekhar463, which user are you using to run Splunk, and does that user have permission to read this file? Please also check that the path of the file is correct by running the dir command in a cmd window. Ciao. Giuseppe
It's still not coming in. The file is a text file, as below, under Program Files\Crestron\CCS400\User\Logs\ and I want to ingest the file CCSFirmwareUpdate.txt.
Hi there, I use a Splunk Cloud instance with Universal Forwarders installed on each server. From there I have edited the inputs.conf file to enable the [perfmon://CPU] stanza. I am wondering if there are any out-of-the-box dashboards or recommended searches for putting this monitoring to use. All the information I have been able to find online is in regard to an EOL add-on (Splunk App for Infrastructure) or Splunk on-premises instances. (This is a problem I have faced since beginning work on Splunk: a huge lack of documentation for Splunk Cloud vs. on-prem.) Thank you for any help in advance, Jamie
Hi @AL3Z, as I said, you have to set 644 for all conf files and 755 for folders. As @PickleRick also said, do this action on a Linux server. Ciao. Giuseppe
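If you need to apply those modes in bulk on a Linux box, a minimal sketch (equivalent to a pair of `find ... -exec chmod` calls) could look like this; the root path is whatever app directory you're fixing, and this assumes everything under it should follow the 644/755 rule:

```python
import os

def normalize_perms(root):
    # Set 755 on directories and 644 on regular files, as recommended
    # for Splunk app/conf directories.
    for dirpath, dirnames, filenames in os.walk(root):
        os.chmod(dirpath, 0o755)
        for name in filenames:
            os.chmod(os.path.join(dirpath, name), 0o644)
```

Run it once against the extracted app directory before deploying.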
Hi @_pravin, the only way to get the location of a connection is by mapping the clientip field to a location. You should have a map of your internal VLANs and their locations, so you could put the VLANs and their locations in a lookup and use it to map the clientip of the connection. Ciao. Giuseppe
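As a rough model of what that lookup does, here is a Python sketch; the CIDR ranges and office names are hypothetical and stand in for your real VLAN map:

```python
import ipaddress

# hypothetical lookup: internal VLAN CIDRs mapped to office locations
vlan_locations = {
    "10.1.0.0/16": "London",
    "10.2.0.0/16": "Chennai",
}

def locate(clientip):
    # return the location of the first VLAN range containing the client IP
    addr = ipaddress.ip_address(clientip)
    for cidr, location in vlan_locations.items():
        if addr in ipaddress.ip_network(cidr):
            return location
    return "unknown"
```

In Splunk the same matching would be done with a CIDR-matching lookup against the clientip field.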
Hi All, I am trying to get login data about the number of users logged in to the Splunk instance every day. I got login data using _internal logs as well as audit logs about the number of users logged in to the instance. Is it possible to get the location the person is logging in from?

index="_internal" source=*access.log user!="-" /saml/acs | timechart span=1d count by user
index=_audit login action="login attempt" | table _time user action info reason | timechart span=1d count by user

We have SAML authentication set up, not normal authentication, and since we have offices all over the world, getting the location might help identify where the users are logging in from as well. Thanks in advance. Pravin
Does it always fail i.e. with different time ranges selected or just some of them?
From here, the "Total Pallet" panel is not giving any results. Can you please help me identify the error and suggest a fix?
=======================================================================
<form version="1.1" theme="light">
  <label>Throughput : Highbay</label>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="time" token="time" id="my_date_range" searchWhenChanged="true">
        <label>Select the Time Range</label>
        <default>
          <earliest>-7d@h</earliest>
          <latest>now</latest>
        </default>
        <change>
          <eval token="time.earliest_epoch">if('earliest'="",0,if(isnum(strptime('earliest', "%s")),'earliest',relative_time(now(),'earliest')))</eval>
          <eval token="time.latest_epoch">if(isnum(strptime('latest', "%s")),'latest',relative_time(now(),'latest'))</eval>
          <eval token="macro_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "throughput_macro_summary_1d",if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "throughput_macro_summary_1h","throughput_macro_raw"))</eval>
          <eval token="form.span_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "d", if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "h", $form.span_token$))</eval>
        </change>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Total Pallet</title>
        <search>
          <query>|`$macro_token$(span_token="$span_token$")` |strcat "raw" "," location group_name | timechart span=1d count by location</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>
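For what it's worth, the macro-selection logic in the <change> evals can be sketched in Python to sanity-check the thresholds (2,592,000 s is roughly 30 days, 86,400 s is one day); the function name is made up, the macro names mirror the dashboard:

```python
def pick_macro(earliest_epoch, latest_epoch):
    # mirror the macro_token eval: pick a summary granularity by span
    span = latest_epoch - earliest_epoch
    if span > 2592000:        # more than ~30 days: daily summary
        return "throughput_macro_summary_1d"
    if span > 86400:          # more than 1 day: hourly summary
        return "throughput_macro_summary_1h"
    return "throughput_macro_raw"
```

If the tokens never populate (for example because the change handler doesn't fire on load), $macro_token$ stays empty and the panel's search is invalid, which would match the "no results" symptom.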
@ITWhisperer There is no need to raise a case; the macro is now working. I removed the "|" from the macro used before the data model, and after that, it works fine. Let me check the other things; if needed, I will post my queries here.
I didn't use datamodel, I was just testing using a token inside a macro
@ITWhisperer Sure, I will. In the earlier chat, you said that you had used the same approach in your dashboard and it worked fine. Can you share that link with me for reference?
The error message says it all - it looks like you can't use datamodel from within a macro. You could argue that this is a bug in the parser - please raise a support ticket with Splunk.
You might have a bit differently prepared macros/searches you use there.  
@PickleRick Nice explanation. But my approach is working fine on other dashboards.
Don't use Windows for manipulating Unix-related files/archives. It's the same problem as managing apps with a deployment server run on a Windows box - Windows doesn't handle Unix file permissions properly, and even if you run WSL-based Ubuntu to access your Windows filesystem, it won't work properly, since Windows file permissions don't "match" Unix ones. Just copy your tar archive to the _inside_ of your WSL instance and untar it there.