The user field is already present in the data, but it contains the wrong info, so I want to extract the user field from the raw logs. The field name should be "user". The "user" field I extracted is not showing up in Interesting Fields because a field named user is already auto-extracted. How can I make my extraction work with the field name "user"?
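One hedged pattern (the index, sourcetype, and regex here are placeholders, not your actual values): capture the value under a temporary name, then overwrite the auto-extracted user with eval, which sidesteps the naming conflict with the existing search-time extraction:

```
index=my_index sourcetype=my_sourcetype
| rex field=_raw "user=(?<raw_user>\S+)"
| eval user=raw_user
```

Alternatively, depending on where the existing user field comes from (a TA's props.conf, for example), adjusting extraction precedence in props.conf may work, but the eval approach above is the least invasive sketch.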
Hi everyone, can someone please guide me in fixing the below error while installing an add-on using the CLI? I used the below command to install the add-on. The error is: parameters must be in the form '-parameter value' ./splunk install app splunk-add-on-for-unix-and-linux_701.tar /home/
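That error usually means the command received an extra positional argument: `install app` takes a single package path, so the trailing `/home/` is being read as a stray parameter. A hedged sketch, assuming the package actually sits under /home:

```shell
./splunk install app /home/splunk-add-on-for-unix-and-linux_701.tar -update 1
```

The `-update 1` flag is only needed when upgrading an already-installed app; drop it for a fresh install.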
Hi, in my index I have a couple of time fields that are returned via a simple search: _time = 1/20/2022 1:38:55.000 PM (the Splunk-generated time) and body.timestamp = 2022-01-20T21:38:45.7774493Z (the transaction time from our log). I am trying to format the time output with the convert function but can only get the first result to return. | convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS timestamp = 2022-01-20 21:38:55 | convert timeformat="%Y-%m-%d %H:%M:%S" ctime(body.timestamp) AS timestamp2 = none Am I missing something for the second timestamp to be returned? Thanks!
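A likely cause: `convert ctime` expects an epoch value, and body.timestamp is an ISO-8601 string, so converting it yields nothing. One hedged sketch is to parse the string with strptime first (the %7N subsecond specifier is an assumption for the 7-digit fraction; adjust if your version handles it differently — note the single quotes around the dotted field name inside eval):

```
| eval ts2=strptime('body.timestamp', "%Y-%m-%dT%H:%M:%S.%7NZ")
| eval timestamp2=strftime(ts2, "%Y-%m-%d %H:%M:%S")
```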
Hi, in the past (Splunk Enterprise v7.x.x) I used the below search to run a report every few minutes. There were so many results that, due to limitations, I had to run them in 1-day spans. I needed to do this for 6 months of data, so I automated the process with a repeating report. I would run this search to create the first entries, which is necessary for the next step: index="app" sourcetype="api" type=log* | eval time=_time | sort time desc | table time type version | outputlookup append=false My_file.csv Then I created a report and set it to run every 1 or 2 minutes with the below search. It basically looks at the earliest date in the My_file.csv file, then adjusts the earliest and latest times for the main search. index="app" sourcetype="api" type=log* [| inputlookup My_file.csv | stats min(time) as mi | eval latest=(mi-0.001) | eval earliest=(latest-86400) | return earliest latest] | eval time=_time | table time type version | sort time desc | outputlookup append=true My_file.csv It just runs the search with the timeframe in my Splunk time picker. It doesn't seem to take the earliest and latest from my 'return' command in the subsearch. If I try running the subsearch only, then I do get a result: | inputlookup My_file.csv | stats min(time) as mi | eval latest=(mi-0.001) | eval earliest=(latest-86400) | return earliest latest gives me the below results, so I don't get why the value isn't used in the top search: earliest="1642374033.873" latest="1642719633.873" It works though if I do a map, but that's not a viable solution due to the high volumes: | inputlookup My_file.csv | stats min(time) as mi | eval latest=(mi-0.001) | eval earliest=(latest-86400) | table earliest latest | map maxsearches=10 search="search index="app" sourcetype="api" type=log* earliest="$earliest$" latest="$latest$"" What's frustrating is that this used to work, and now that I need to do the same exercise I can't use it again. Does anybody have an idea why it's not working?
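One hedged thing to try (a sketch, not a confirmed fix): return whole-second epochs instead of quoted fractional values, since the subsearch must resolve to earliest/latest terms the outer search can parse, and quoted fractional epochs have behaved differently across versions:

```
index="app" sourcetype="api" type=log*
    [| inputlookup My_file.csv
     | stats min(time) as mi
     | eval latest=floor(mi), earliest=latest-86400
     | return earliest latest]
| eval time=_time
| table time type version
| sort time desc
| outputlookup append=true My_file.csv
```

Running the subsearch alone and inspecting the returned string (as you did) remains the quickest way to confirm what the outer search actually receives.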
Have you experienced similar issues? Thanks
We've installed and configured the Azure add-on, and while it works, the various inputs seem to hang once or twice a day. For us, this is most noticeable with the device and user inputs (sourcetypes azure:aad:device and :user). I've set the add-on to DEBUG level logging, but there's nothing especially obvious. Environment: This add-on is running on a heavy forwarder that exists almost exclusively to run API-based add-ons like this. It's a relatively untaxed RHEL 8 VM. We're using version 3.2.0, the now-current version of the add-on. (We had the same problem with 3.1.1 at least. I'm not sure how far back the problem goes, but it's been an intermittent issue for at least a few months.) First: the add-on has what looks to me like a bug in the interval setting. We've set the interval to "300" -- this is labeled as the number of seconds between queries, but the logs show the queries are running closer to every 300 milliseconds. If we set it lower than 300, the time between queries seems to shorten as you would expect, but setting it higher than 300 doesn't seem to work. We've tried setting it to values like 5000, to see if we could trick the add-on into pulling every 5 seconds, but that didn't do what we hoped. More important, though, is that the input periodically hangs. The normal behavior looks like this:
2022-01-20 16:24:39,314 DEBUG pid=3938342 tid=MainThread file=connectionpool.py:_make_request:461 | https://graph.microsoft.com:443 "GET /v1.0/devices/?$skiptoken=(token 1) HTTP/1.1" 200 None
2022-01-20 16:24:39,476 DEBUG pid=3938342 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ AAD devices nextLink URL (@odata.nextLink): https://graph.microsoft.com/v1.0/devices/?$skiptoken=(token 2)
2022-01-20 16:24:39,477 DEBUG pid=3938342 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ Getting proxy server.
2022-01-20 16:24:39,477 INFO pid=3938342 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2022-01-20 16:24:39,479 DEBUG pid=3938342 tid=MainThread file=connectionpool.py:_new_conn:975 | Starting new HTTPS connection (1): graph.microsoft.com:443
2022-01-20 16:24:39,741 DEBUG pid=3938342 tid=MainThread file=connectionpool.py:_make_request:461 | https://graph.microsoft.com:443 "GET /v1.0/devices/?$skiptoken=(token 2) HTTP/1.1" 200 None
Basically, the add-on makes a request with a given token; part of the output of that is a new token, and then (interval) milliseconds later it uses that token and the cycle starts again. Eventually, though, the add-on gets to the fifth line in the above (where it's starting a new connection), and... that's it. The add-on doesn't do anything until one of the Splunk admins gets the alert we set up that says "hey, there haven't been any new events of sourcetype X in index Y for a couple hours, maybe you should take a look". Sometimes the inputs will hang just a few hours after a restart; sometimes they work just fine for weeks at a time. Logging into the heavy forwarder and toggling the input to "Disabled" and right back to "Enabled" clears the issue. Presumably disabling the input kills off the underlying Python script, then re-enabling it launches a fresh instance. We've thought about scripting a regular restart of this add-on, but there doesn't seem to be a way in the CLI to do so, short of restarting the whole heavy forwarder. That's a really big hammer for a relatively small nail, so it's not our first choice. And given that the add-on doesn't hang on any predictable schedule, we don't think it's worth the trade-off. (Plan 5 or 6 would probably be building a new heavy forwarder for JUST the Azure add-on, so a scheduled restart of Splunk as a whole won't impact any other add-ons. But since building a new machine incurs costs to our team, and it's still an inelegant solution, it's probably the last-resort plan.)
Aside from setting the add-on to "DEBUG", is there anything else I can do within the add-on to debug this? Has anyone had problems like this before, and if so, how did you work around them? Is the "interval" thing really a bug, and if so, to whom should I report it?
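On the "restart just this input" part: a scripted toggle through the management REST API avoids restarting the whole forwarder. A hedged sketch (the app name, input type, and stanza name below are assumptions; check the manager UI or `splunk btool inputs list` for the real ones in your environment):

```shell
# disable, then re-enable, the hung modular input via the management port
curl -k -u admin:changeme -X POST \
  "https://localhost:8089/servicesNS/nobody/TA-MS-AAD/data/inputs/MS_AAD_device/my_device_input/disable"
curl -k -u admin:changeme -X POST \
  "https://localhost:8089/servicesNS/nobody/TA-MS-AAD/data/inputs/MS_AAD_device/my_device_input/enable"
```

Cron-scheduling that pair would mimic the manual toggle that already clears the hang, without touching other add-ons on the forwarder.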
Hello, I would like to know if it's possible to upgrade from Splunk 6.2.1 to 8.2.4 with an Enterprise perpetual license. If not, what do I have to do? Thank you all in advance.
Unable to see my host in index=_introspection or index=_internal. After running the above query on the same host, I can't see the hostname. I am also unable to see the host in ES.
Hi, in the following log entries, I wanted to extract the uri in a specific format: log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v2/clients?skip=0top=100,MediaType=null,XRemoteIP=null" log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v1/clients/234,MediaType=null,XRemoteIP=null" log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v1/users/123,MediaType=null,XRemoteIP=null" For uri, I wanted the full extract up to "?" or ",". Also remove any GUIDs and digits from the URL, except for "/v1/" and "/v2/": http://monolith-xxx.abc.com/v2/clients http://monolith-xxx.abc.com/v1/clients/ http://monolith-xxx.abc.com/v1/users/ My current Splunk query is as below: index=aws_abc env=prd-01 uri Method StatusCode ResponseTimeMs | rex field=log "ResponseTimeMs=(?<ResponseTimeMs>\d+),StatusCode=(?<StatusCode>\d+)" | rex field=log "\"?Method\"?\=(?<Method>[^,]*)" | rex field=log "Uri=(?<uri>[^\,]+)" | rex field=uri mode=sed "s/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|\d*//g" | table uri,Method,StatusCode,ResponseTimeMs I get values in the table for all 4, but uri in the table shows as below: http://monolith-xxx.abc.com/v/clients?isactive=true http://monolith-xxx.abc.com/v/users/?filter=(Name%startswith%'H') Expected Output: http://monolith-xxx.abc.com/v2/clients http://monolith-xxx.abc.com/v2/users/ Please help. Thanks
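The `\d*` alternative in the sed expression strips every digit, including the one in /v1/ and /v2/. A hedged three-step sketch: drop the query string first, then strip GUIDs, then strip digit runs only when they are not preceded by a "v" (the lookbehind assumes PCRE-style regex support in sed mode; untested):

```
| rex field=uri mode=sed "s/[?,].*$//"
| rex field=uri mode=sed "s/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}//g"
| rex field=uri mode=sed "s/(?<![vV])\d+//g"
```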
Hello, we have an Oracle database schema with multiple tables. All these tables have a column called idversion, and each has a view that shows only the version that was specified in the table sesio_te. So the expected way to work with these tables is to INSERT INTO "XXXX"."SESIO_TE" (sessionvalue, usuario, version_actual) VALUES ('test version', 'splunk', 21) and then SELECT * FROM "XXXX"."PRESU_IN_AUD_VI" WHERE ... What I want to achieve is a dashboard that shows some data in these tables on demand from the DB, not indexed data. So if I have multiple users watching this dashboard and they are asking for different versions, I need to update the version before querying the views. But if you try to do INSERT INTO "XXXX"."SESIO_TE" (sessionvalue, usuario, version_actual) VALUES ('test version', 'splunk', 21); SELECT * FROM "XXXX"."PRESU_IN" WHERE ... ; in the same dbxquery command, it fails saying java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended. Any ideas how to do this? Thank you in advance.
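dbxquery generally runs a single statement, so the INSERT and SELECT can't be chained with a semicolon. One possible workaround (a sketch only, not verified against this schema; the function and parameter names are placeholders): wrap both steps in a stored function that returns a ref cursor, then call that function from dbxquery so the whole thing is one statement:

```sql
-- hypothetical wrapper: sets the session version, then returns the view's rows
CREATE OR REPLACE FUNCTION set_version_and_query (p_version NUMBER)
RETURN SYS_REFCURSOR AS
  c SYS_REFCURSOR;
BEGIN
  INSERT INTO "XXXX"."SESIO_TE" (sessionvalue, usuario, version_actual)
  VALUES ('dashboard', 'splunk', p_version);
  OPEN c FOR SELECT * FROM "XXXX"."PRESU_IN_AUD_VI";
  RETURN c;
END;
```

Whether concurrent dashboard users stepping on each other's session version remains an issue depends on how sesio_te scopes the version per session, so that part is worth checking with your DBA.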
I need help comparing an ISO 8601 date field with a specific date. Below is a simple example: index=devices | table device_last_seen Results: device_last_seen 2022-01-21T13:09:58Z 2022-01-21T13:10:06Z 2022-01-17T14:56:00Z 2022-01-16T10:57:18Z My goal is to show only the devices that reported in the last 24h. It should be like this: device_last_seen 2022-01-21T13:09:58Z 2022-01-21T13:10:06Z However, the search below didn't return any results. index=devices | eval last24h=relative_time(now(), "-1d") | where device_last_seen > last24h | table device_last_seen Thanks in advance for your help.
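The comparison fails because device_last_seen is a string while relative_time() returns an epoch number, so the `where` clause compares incompatible types. A hedged sketch that converts the field first (the trailing Z is matched literally; adjust the format string if your values differ):

```
index=devices
| eval last_seen=strptime(device_last_seen, "%Y-%m-%dT%H:%M:%SZ")
| where last_seen > relative_time(now(), "-24h")
| table device_last_seen
```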
Hello, in which direction must firewall openings be configured for indexers and search heads to be able to communicate with the license manager and fetch licenses, etc.? Is it only <search_head/indexer> --> <License_manager> (on port 8089), or both ways, so <search_head/indexer> --> <License_manager> AND <License_manager> --> <search_head/indexer> (on port 8089)? I have seen network topologies saying different things regarding this. Is one way enough, or do we need both ways on port 8089? Thank you!
Hi, as you can see, I display 3 different panels (one map viz and 2 chart viz) in the same row. I have modified the standard width of these panels in CSS. Now, I would like to add 2 other chart viz in the same row and to expand the height of the row. Could you help me please?     <form> <label>XXX</label> <fieldset submitButton="false"> <input type="time" token="tokTime" searchWhenChanged="true"> <label>Select Time</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel depends="$alwaysHideCSS$"> <html> <style> #chart{ width:20% !important; } #chart2{ width:20% !important; } #map{ width:60% !important; } </style> </html> </panel> <panel id="map"> <title>XXX</title> <map> <search> <query></query> <earliest>$tokTime.earliest$</earliest> <latest>$tokTime.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="drilldown">none</option> <option name="mapping.map.center">(46,2)</option> <option name="mapping.map.zoom">5</option> <option name="mapping.type">marker</option> <option name="refresh.display">progressbar</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> </map> </panel> <panel id="chart"> <title>XXX</title> <chart> <search> <query></query> <earliest>$tokTime.earliest$</earliest> <latest>$tokTime.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option> <option name="charting.axisTitleX.text">Bureaux</option> <option name="charting.axisTitleY.text">Nb utilisateurs</option> <option name="charting.chart">column</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.drilldown">none</option> <option name="charting.fieldColors">{"nbsam":#f70505}</option> <option name="charting.legend.placement">none</option> <option name="height">230</option> <option 
name="refresh.display">progressbar</option> </chart> </panel> <panel id="chart2"> <title>XXX</title> <chart> <search> <query></query> <earliest>$tokTime.earliest$</earliest> <latest>$tokTime.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option> <option name="charting.axisTitleX.text">Bureaux</option> <option name="charting.axisTitleY.text">Nb utilisateurs</option> <option name="charting.chart">column</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.drilldown">none</option> <option name="charting.fieldColors">{"nbsam":#27B508}</option> <option name="charting.legend.placement">none</option> <option name="height">230</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> </form>        
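For five panels in one row, one approach is to extend the existing hidden-CSS panel with ids for the two new charts and shrink the map (chart3 and chart4 are hypothetical ids you would give the new panels; the per-chart height option already in the XML is what controls the row height). A sketch of the replacement style panel:

```xml
<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      #chart, #chart2, #chart3, #chart4 { width:15% !important; }
      #map { width:40% !important; }
    </style>
  </html>
</panel>
```

Then raise each chart's `<option name="height">` value (currently 230) to taller the row.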
I have a query that returns a set of hosts that have an event string: index=anIndex sourcetype=aSourceType ("aString1" AND ( host = "aHostName*")) | stats values(host) AS aServerList1 I have a list of servers ("Server1", "Server2", "Server3") <- ServerList2 What I'm trying to do is find servers/hosts that are not returned by the initial query, i.e. hosts that exist in ServerList2 but are not in ServerList1.
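One join-free sketch: append the static list with a zero count, then keep only hosts that never appeared in the events (the server names are from the post; split/mvexpand build the comparison set):

```
index=anIndex sourcetype=aSourceType "aString1" host="aHostName*"
| stats count by host
| append
    [| makeresults
     | eval host=split("Server1,Server2,Server3", ",")
     | mvexpand host
     | eval count=0]
| stats sum(count) as count by host
| where count=0
| fields host
```

A lookup file of expected servers plus `inputlookup ... | search NOT [...]` is another common variant if the list is long.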
Hello, I have a table with this field: GET /url1/url2?code1=11&code2=12&code3=13 HTTP/1.1 I would like to split this field into 3 fields: code1, code2 and code3. I tried with this Splunk command: | rex field=message.jbosseap.access_log.http_request "codeRegate=(?<codeRegate>.*)" but it is not working. How can I do this? Thank you!
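A hedged sketch: one rex with three named groups pulls the query-string parameters out in a single pass (the field name is taken from the post; adjust if the codes can appear in a different order):

```
| rex field=message.jbosseap.access_log.http_request "code1=(?<code1>\d+)&code2=(?<code2>\d+)&code3=(?<code3>\d+)"
| table code1 code2 code3
```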
I have: sourcetype_A (fields: ID, age, city, state) sourcetype_B (fields: ID, job, salary, gender) The field "ID" is common to both sourcetype_A and B, but with a caveat. example1: for ID = 1687, it is present in sourcetype_A as 0001687 and in sourcetype_B as 1687 example2: for ID = 9843, it is present in sourcetype_A as 009843 and in sourcetype_B as 9843 example3: for ID = 8765, it is present in sourcetype_A as 08765 and in sourcetype_B as 8765 where 1687, 9843, 8765 are the actual IDs. The leading zeros are creating a mess in sourcetype_A. I am not allowed to use join, so this is what I am trying, but I am not seeing all my data. =================================== (index=country) sourcetype=sourcetype_A OR sourcetype=sourcetype_B | eval ID = ltrim(ID,"0") | eventstats dc(sourcetype) as dc_st | where dc_st >1 | table ID, age, city, state, job, salary, gender =================================== I also tried | stats values(age) as age ... by ID, but stats gave me massive multivalue fields with messy duplicates. I am asked to get one row per ID (no multivalues). Any help?
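Since stats values() produces multivalue fields, one hedged alternative is latest() per attribute after normalizing the ID, which yields one row per ID (this assumes each attribute has a single meaningful value per ID; note eventstats counted sourcetypes per event, not per ID, which is why data went missing):

```
index=country (sourcetype=sourcetype_A OR sourcetype=sourcetype_B)
| eval ID=ltrim(ID, "0")
| stats latest(age) as age latest(city) as city latest(state) as state
        latest(job) as job latest(salary) as salary latest(gender) as gender
        dc(sourcetype) as dc_st by ID
| where dc_st > 1
| fields - dc_st
```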
I want to build this type of dashboard using internal data in Splunk, but I couldn't interlink this structure using Dashboard Studio. Please help with this. Thank you in advance, veeru
Hi, in the following log, I wanted to extract Url, Method, ResponseTimeMs, StatusCode as a table: log: a_level="INFO", a_time="null", a_sub="xxx", a_uid="xx", a_tid="xx", a_rid="guid", a_thread="175" a_type="type", a_met="Move", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v2/clients?skip=0top=100,MediaType=null,RemoteIP=::ffff:10.10.10.10,XRemoteIP=null,ContentType=application/json,ContentLength=9702,ResponseTimeMs=54,StatusCode=200,ReasonPhrase=null,Referrer=null For the URL, I wanted the full extract "http://monolith-xxx.abc-xyz/v2/clients?skip=0top=100" My current Splunk query is as below: index=aws_abc env=prd-01 uri Method StatusCode ResponseTimeMs | eval DataSet=log | rex field=DataSet "ResponseTimeMs=(?<ResponseTimeMs>\d+),StatusCode=(?<StatusCode>\d+)" | rex field=DataSet "Url=(?<uri>[^,]+),Method=(?<Method>\w+)" | table Url,Method,ResponseTimeMs, StatusCode I get values in the table for ResponseTimeMs and StatusCode, but not for Url and Method. Please help. Thanks
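Worth double-checking the capture against the raw event: the log writes `Uri=` (not `Url=`) and puts Method before Uri, while the rex looks for `Url=...,Method=...`, so it can never match. A hedged sketch matching the sample line as posted:

```
| rex field=DataSet "Method=(?<Method>[^,]+),Uri=(?<Uri>[^,]+)"
| table Uri, Method, ResponseTimeMs, StatusCode
```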
Dear all, at our company we deploy the Windows UF on all the VDI machines. For security reasons we make use of a security policy where PowerShell is prohibited from running. On a VDI machine, the command Get-ExecutionPolicy -List | Format-Table -AutoSize gives me the below:
Scope         ExecutionPolicy
-----         ---------------
MachinePolicy Undefined
UserPolicy    Undefined
Process       Undefined
CurrentUser   Undefined
LocalMachine  Restricted
The last part is where the problem is: it restricts the local system account from executing the scripts in the universal forwarder bin folder. It throws an error for the splunk-powershell.ps1 script and then consumes a lot of CPU. What can we do to use the scripts while keeping the security policy? The policy is a requirement in our company, so we can't drop it. Thanks in advance.
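If the LocalMachine policy has to stay restrictive, one commonly suggested path (a sketch, assuming an internal code-signing certificate is available; the cert store path is an assumption) is to move the policy to AllSigned and sign the forwarder's scripts, so they run without opening up unsigned execution:

```powershell
# Sign the UF's PowerShell script with an internal code-signing cert (store/path are assumptions)
$cert = Get-ChildItem Cert:\LocalMachine\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath "C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.ps1" -Certificate $cert
```

Note that signing would need to be repeated after UF upgrades replace the scripts, so it is worth weighing against simply disabling the PowerShell-based inputs on these VDI machines if they are not needed.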
I have a server where logs are generated on a daily basis in this format: /ABC/DEF/XYZ/xyz17012022.zip /ABC/DEF/XYZ/xyz16012022.zip /ABC/DEF/XYZ/xyz15012022.zip or /ABC/DEF/RST/rst17012022.gz /ABC/DEF/RST/rst16012022.gz /ABC/DEF/RST/rst15012022.gz I am getting this error every time I index a .gz, .tar or .zip file: "updated less than 10000ms ago, will not read it until it stops changing; has stopped changing, will read it now." This problem was addressed earlier in this post: https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/gz-file-not-getting-indexed-in-splu... As suggested, I have used "crcSalt = <SOURCE>", but I am still facing similar errors. inputs.conf: [monitor:///ABC/DEF/XYZ/xyz*.zip] index = log_critical disabled = false sourcetype = Critical_XYZ ignoreOlderThan = 2d crcSalt = <SOURCE> I am getting this event in the internal logs while ingesting the log file.
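Since each daily archive is written once and never appended to, a batch input may fit better than monitor for this pattern; the "will not read it until it stops changing" message is the monitor input waiting out a file that is still being written. A hedged inputs.conf sketch to adapt (note move_policy = sinkhole deletes files after indexing, so test on copies first):

```
[batch:///ABC/DEF/XYZ/xyz*.zip]
move_policy = sinkhole
index = log_critical
sourcetype = Critical_XYZ
```

If the originals must be kept in place, the alternative is to keep the monitor stanza but only drop completed archives into the watched directory (e.g. write elsewhere, then mv into place).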
Hi, we installed Splunk_TA_nix and enabled both cpu.sh and cpu_metrics.sh to capture CPU-related logs. Is there an SPL query we can use to calculate CPU utilization? I do not have an in-depth Linux background, so I am not sure which fields should be used to calculate the percentage of CPU utilization. If you can share the formula or the fields I need to use from Splunk_TA_nix, I would appreciate it. Our aim is to check the historical CPU utilization of our Splunk heavy forwarder. Thanks
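With the cpu.sh input from Splunk_TA_nix, the sourcetype=cpu events carry a pctIdle field, so utilization is commonly derived as 100 minus idle. A hedged sketch (the index name is an assumption; the TA's conventional default is os, and CPU=all selects the aggregate line rather than per-core rows):

```
index=os sourcetype=cpu CPU=all
| eval cpu_used_pct = 100 - pctIdle
| timechart avg(cpu_used_pct) as avg_cpu_pct by host
```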