All Topics

I have a transaction command which correlates two log entries. If I pipe this result into a timechart command, which log entry's timestamp does it use to bucketize the results (the first or the second)? Also, is there a way to specify this? Thanks! Jonathan
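By default, transaction stamps each result with the _time of the earliest event it contains, so timechart buckets on the first log entry's timestamp. To bucket on the second entry instead, a minimal sketch (the field session_id is an assumed correlation key):

index=myapp | transaction session_id
| eval _time=_time+duration
| timechart span=1h count

transaction adds a duration field (latest minus earliest event time), so shifting _time by it moves each result to the closing event's timestamp.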
I have a Splunk query that does a lot of computation and eventually returns only two calculated fields, _time and STORE_ID, via the table command. The _time field is formatted exactly like the built-in _time field (e.g., "2022-01-17 23:50:25,897"). I want to do a timechart showing the count of how many times each unique STORE_ID appears in a given time bucket, using my calculated _time variable to fill the buckets. What do I put in the timechart clause to accomplish this? Thanks! Jonathan
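Since timechart buckets strictly on _time as an epoch value, the string form has to be parsed back first. A sketch (the %3N subsecond token after the comma is an assumption to verify against the data):

... | eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S,%3N")
| timechart span=1h count by STORE_ID

Any span works once _time is numeric again; the by STORE_ID clause gives one count series per store.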
All... Looking to see if anyone has any thoughts on bringing different timestamp formats into the same sourcetype. I am working on an issue where we are bringing in Crowdstrike data that is just dumped into an S3 bucket. Some of the data comes into buckets that have specific directories, so I can set sourcetyping at the source level for those. However, we have some data coming into the same bucket and the same file that may have different formats. Examples of what we are seeing:

"modified_time":"2022-01-10T23:58:25.865570789Z"
"timestamp":"2022-01-21T20:37:37Z"

We have tried defining a datetime.xml and have used the following props settings:

[crowdstrike:edr]
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
#TIME_FORMAT = %s%3N
TIME_PREFIX = "timestamp":|"modified_time":|"_time":|"Time":
#TIME_PREFIX = timestamp
DATETIME_CONFIG = /etc/apps/fmac_crowdstrike_props/datetime.xml
TRANSFORMS-filter-edr-splunkd = crowdstrike_filter_splunk,crowdstrike_filter_splunkforwarder,crowdstrike_filter_endofprocess
TRUNCATE = 999999
disabled = false
kv_mode = json

Please let me know if you have any thoughts on this or ideas that will help. Thanks!
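One avenue worth testing (a sketch, not a confirmed fix): both sample values are ISO 8601 and differ only in sub-second precision, which Splunk's default timestamp processor already tolerates. That suggests dropping the custom DATETIME_CONFIG and letting the default logic run behind a TIME_PREFIX that anchors on either key, with a lookahead long enough for the nanosecond form:

[crowdstrike:edr]
# TIME_PREFIX is a regex; this lands the cursor just before the ISO 8601 value
TIME_PREFIX = "(?:timestamp|modified_time|_time|Time)":"?
# "2022-01-10T23:58:25.865570789Z" is 30 characters, so leave headroom
MAX_TIMESTAMP_LOOKAHEAD = 40
# TIME_FORMAT intentionally left unset so automatic ISO 8601 detection can absorb both precisions

A fixed TIME_FORMAT would have to cope with the differing fractional-second lengths, which is why this sketch avoids one.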
Here is some background on what I am trying to accomplish: I have 3 separate devices that will be in any of 6 states of activity throughout a day. I have a base search that tells me when a device changes state, what state it changes to, and what time this occurs. I would like to have a chart or graph that tells me how long each device was in each state for a given day. I can't put my search or data on the forum.
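A common pattern for this kind of duration question (a sketch with assumed field names device and state, since the real search can't be shared): sort each device's change events newest-first, pull the previous row's timestamp (which is the next change chronologically), and the gap is the time spent in the state the row entered:

base_search
| sort 0 device -_time
| streamstats current=f window=1 last(_time) as next_change by device
| eval seconds_in_state=next_change-_time
| stats sum(seconds_in_state) as total_seconds by device state

The last state of the day has no following change, so next_change is null there; filling it with now() or the end of the day is a judgment call.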
Hello, I'm trying to search Splunk for user activity pertaining to logging into Splunk over the last X days. Everything I've tried so far returns some results but not all. I've searched the _audit index as well as

| rest /services/authentication/httpauth-tokens | fields userName, timeAccessed | dedup userName sortby timeAccessed

Does anyone have a search for this, or a dashboard that would pull this information? I need user and date last accessed at a minimum. Thanks, Craig
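For comparison, a sketch against the audit index (the action/info values here are from memory and worth verifying against your _audit events):

index=_audit action="login attempt" info=succeeded
| stats latest(_time) as last_login by user
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")
| table user last_login

Unlike the httpauth-tokens REST endpoint, which only reflects sessions alive at the moment it is queried, _audit keeps history for the full search window, which may explain the partial results.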
I have a dashboard that has 3 Inputs - "Change Type", time and text box. The "Change Type" is dynamically populated using "choice value" based on a search string. There are two change types, Config ... See more...
I have a dashboard that has 3 Inputs - "Change Type", time and text box. The "Change Type" is dynamically populated using "choice value" based on a search string. There are two change types, Config and Admin. Example: <form theme="dark"> <label>Admin and Config Change Reports</label> <description>Change Events</description> <fieldset submitButton="true" autoRun="false"> <input type="dropdown" token="tok_change" searchWhenChanged="true"> <label>Change Type:</label> <choice value="index=admin_changes <some other spl>">Admin change</choice> <choice value="index=config_changes <some other spl>">Config change</choice> The users much choose from a drop down, one of the above change choices. Once the choice is made a table is populated with the results. What I want to do is when the panel (table) is populated I want the "change type" to be the panel title. Thanks in advance.
I am trying to use the case and match functions with more than one option. I keep getting an error message regarding the parentheses; nothing is working, and I don't understand what's missing from the syntax. Here is the search:

| eval state_ack_error=case(match(_raw, "ACK\-CODE\=AA"), 1, match(_raw matches "STATUS\=SENT"), 1, 1=1, 0)

Error message: Error in 'eval' command: The expression is malformed.
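For what it's worth, the second match() call mixes in a "matches" keyword that eval does not have; match() always takes a field and a regex separated by a comma. A corrected sketch:

| eval state_ack_error=case(match(_raw, "ACK-CODE=AA"), 1, match(_raw, "STATUS=SENT"), 1, 1=1, 0)

(The backslashes before - and = were harmless but unnecessary, since neither character is special in a regex outside a character class.)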
Would like to know the timetable for Splunk Enterprise and the Splunk Universal Forwarder being supported on/compatible with Windows Server 2022 and Windows 11. Thank you
I would like to count the values of a multivalue field in the table where it has similar values. For example, I need output like below for COMPLETED_CERT_COUNT: it should only show the count of Not_Expired training statuses. I have tried

|eval COMPLETED_CERT_COUNT=mvcount(if(TRAINING_STATUS=="Not_Expired"))

and

| stats mvcount(eval(TRAINING_STATUS="Not_Expired")) as certcount by name

but nothing worked out. Kindly share your suggestions.
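Two sketches that stay close to the attempts above. Within a single event, mvfilter() keeps only the matching values of a multivalue field, so mvcount() can count them; across events, stats supports count(eval(...)) but not mvcount(eval(...)):

| eval COMPLETED_CERT_COUNT=mvcount(mvfilter(TRAINING_STATUS=="Not_Expired"))

| stats count(eval(TRAINING_STATUS=="Not_Expired")) as certcount by name

Note mvcount() returns null rather than 0 when nothing matches, so wrapping it in coalesce(..., 0) may be wanted.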
user field is already present in data, but it is giving the wrong info, I want to extract the user field from raw logs. Field name should be "user" The field name "user" which I extracted is not sh... See more...
The user field is already present in the data, but it is giving the wrong info, so I want to extract the user field from the raw logs. The field name should be "user". The "user" field which I extracted is not showing up in Interesting Fields because a field named user is already auto extracted. May I know how to make my extraction work with the field name "user"?
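If the search-time collision can't be untangled in props.conf, an inline sketch that simply overwrites the auto-extracted value works per search (the regex here is a placeholder for whatever pattern actually matches your raw events):

| rex field=_raw "user=(?<raw_user>\S+)"
| eval user=coalesce(raw_user, user)

For a permanent fix, an EXTRACT in props.conf can also write to user; whether it wins over the existing automatic extraction depends on where and how that extraction is defined, so it needs testing in your environment.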
Hi everyone, can someone please guide me in fixing the below error while installing an add-on using the CLI? The command used to install the add-on:

./splunk install app splunk-add-on-for-unix-and-linux_701.tar /home/

The error is: parameters must be in the form '-parameter value'
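A hedged guess at the cause: splunk install app takes a single positional argument, the path to the package, so the trailing /home/ is parsed as an unexpected extra parameter. If the package sits in /home/, something like:

./splunk install app /home/splunk-add-on-for-unix-and-linux_701.tar

There is no separate destination argument; the app is always unpacked under $SPLUNK_HOME/etc/apps/.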
Hi, in my index I have a couple time fields that are returned via a simple search:

_time = 1/20/2022 1:38:55.000 PM (the Splunk-generated time)
body.timestamp = 2022-01-20T21:38:45.7774493Z (the transaction time from our log)

I am trying to format the time output with the convert function but can only get the first result to return.

| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS timestamp
= 2022-01-20 21:38:55
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(body.timestamp) AS timestamp2
= none

Am I missing something for the second timestamp to be returned? Thanks!
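A sketch of why this can come back empty: ctime() converts epoch numbers, and body.timestamp is an ISO 8601 string, so it has to go through strptime() first; dotted field names also need single quotes inside eval. The %7N subsecond token below is an assumption to verify against your data:

| eval timestamp2=strftime(strptime('body.timestamp', "%Y-%m-%dT%H:%M:%S.%7NZ"), "%Y-%m-%d %H:%M:%S")

If the fractional part refuses to parse, stripping it first is the blunt fallback:

| eval ts=replace('body.timestamp', "\.\d+Z$", "Z")
| eval timestamp2=strftime(strptime(ts, "%Y-%m-%dT%H:%M:%SZ"), "%Y-%m-%d %H:%M:%S")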
Hi, in the past (Splunk Enterprise v7.x.x) I used the below search to run a report every few minutes. There were so many results that, due to limitations, I had to run them in 1-day spans. I needed to do this for 6 months of data, so I automated the process with a repeating report. I would run this search to create the first entries, which are needed for the next step:

index="app" sourcetype="api" type=log* | eval time=_time | sort time desc | table time type version | outputlookup append=false My_file.csv

Then I created a report and set it to run every 1 or 2 minutes with the below search. It basically looks at the earliest date in the My_file.csv file, then adjusts the earliest and latest times for the main search:

index="app" sourcetype="api" type=log*
    [| inputlookup My_file.csv
     | stats min(time) as mi
     | eval latest=(mi-0.001)
     | eval earliest=(latest-86400)
     | return earliest latest]
| eval time=_time
| table time type version
| sort time desc
| outputlookup append=true My_file.csv

It just runs the search with the timeframe in my Splunk time picker; it doesn't seem to take the earliest and latest from my 'return' command in the subsearch. If I run the subsearch only:

| inputlookup My_file.csv | stats min(time) as mi | eval latest=(mi-0.001) | eval earliest=(latest-86400) | return earliest latest

it gives me the below result, so I don't get why the values aren't used in the top search:

earliest="1642374033.873" latest="1642719633.873"

It works if I do a map, but that's not a viable solution due to the high volumes:

| inputlookup My_file.csv | stats min(time) as mi | eval latest=(mi-0.001) | eval earliest=(latest-86400) | table earliest latest | map maxsearches=10 search="search index="app" sourcetype="api" type=log* earliest="$earliest$" latest="$latest$""

What's frustrating is that this used to work, and now that I need to do the same exercise I can't use it again. Does anybody have an idea why it's not working? Have you experienced similar issues? Thanks
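One variable worth ruling out (a hedged sketch, not a confirmed root cause): the subsearch emits quoted fractional epochs, and earliest/latest handling can be fussy about that form, while bare integer epochs are the safest shape. Flooring the values costs at most a second of overlap per run:

[| inputlookup My_file.csv
 | stats min(time) as mi
 | eval latest=floor(mi)
 | eval earliest=latest-86400
 | return earliest latest]

If the time picker still wins, the job inspector's normalizedSearch shows whether the subsearch terms are making it into the final query at all.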
We've installed and configured the Azure add-on, and while it works, the various inputs seem to hang once or twice a day. For us, this is most noticeable with the device and user inputs (sourcetypes azure:aad:device and :user). I've set the add-on to DEBUG level logging, but there's nothing especially obvious.

Environment: this add-on is running on a heavy forwarder that exists almost exclusively to run API-based add-ons like this. It's a relatively untaxed RHEL 8 VM. We're using version 3.2.0, the now-current version of the add-on. (We had the same problem with 3.1.1 at least. I'm not sure how far back the problem goes, but it's been an intermittent issue for at least a few months.)

First: the add-on has what to me looks like a bug in the interval setting. We've set the interval to "300" -- this is labeled as the number of seconds between queries, but the logs show the queries are running closer to every 300 milliseconds. If we set it lower than 300, the time between queries seems to shorten as you would expect, but setting it higher than 300 doesn't seem to work. We've tried setting it to values like 5000, to see if we could trick the add-on into pulling every 5 seconds, but that didn't do what we hoped.

More important, though, is that the input periodically hangs. The normal behavior looks like this:

2022-01-20 16:24:39,314 DEBUG pid=3938342 tid=MainThread file=connectionpool.py:_make_request:461 | https://graph.microsoft.com:443 "GET /v1.0/devices/?$skiptoken=(token 1) HTTP/1.1" 200 None
2022-01-20 16:24:39,476 DEBUG pid=3938342 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ AAD devices nextLink URL (@odata.nextLink): https://graph.microsoft.com/v1.0/devices/?$skiptoken=(token 2)
2022-01-20 16:24:39,477 DEBUG pid=3938342 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ Getting proxy server.
2022-01-20 16:24:39,477 INFO pid=3938342 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2022-01-20 16:24:39,479 DEBUG pid=3938342 tid=MainThread file=connectionpool.py:_new_conn:975 | Starting new HTTPS connection (1): graph.microsoft.com:443
2022-01-20 16:24:39,741 DEBUG pid=3938342 tid=MainThread file=connectionpool.py:_make_request:461 | https://graph.microsoft.com:443 "GET /v1.0/devices/?$skiptoken=(token 2) HTTP/1.1" 200 None

Basically, the add-on makes a request with a given token; part of the output of that is a new token, and (interval) milliseconds later it uses that token and the cycle starts again. Eventually, though, the add-on gets to the fifth line in the above (where it's starting a new connection), and... that's it. The add-on doesn't do anything until one of the Splunk admins gets the alert we set up that says "hey, there haven't been any new events of sourcetype X in index Y for a couple hours, maybe you should take a look". Sometimes the inputs will hang just a few hours after a restart; sometimes they work just fine for weeks at a time.

Logging into the heavy forwarder and toggling the input to "Disabled" and right back to "Enabled" clears the issue. Presumably disabling the input kills off the underlying Python script, then re-enabling it launches a fresh instance. We've thought about scripting a regular restart of this add-on, but there doesn't seem to be a way in the CLI to do so, short of restarting the whole heavy forwarder. That's a really big hammer for a relatively small nail, so it's not our first choice.
And given that the add-on doesn't hang on any predictable schedule, we don't think a scheduled restart is worth the trade-off. (Plan 5 or plan 6 would probably be building a new heavy forwarder for JUST the Azure add-on, so a scheduled restart of Splunk as a whole won't impact any other add-ons. But since building a new machine incurs costs to our team, and it's still an inelegant solution, it's probably the last-resort plan.)

Aside from setting the add-on to "DEBUG", is there anything else I can do within the add-on to debug this? Has anyone had problems like this before, and if you have, how did you work around them? Is the "interval" thing really a bug, and if so, to whom should I report it?
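On the "no way to bounce one input from the CLI" point: modular inputs can usually be disabled and re-enabled through the management REST API without a full restart, which is scriptable from cron. A sketch under assumptions -- the input type and stanza names here (TA_MS_AAD_device, azure_device) are placeholders that would need to be read out of the add-on's inputs.conf:

# disable, then re-enable, one modular input stanza (names are hypothetical)
curl -k -u admin:changeme -X POST https://localhost:8089/services/data/inputs/TA_MS_AAD_device/azure_device/disable
curl -k -u admin:changeme -X POST https://localhost:8089/services/data/inputs/TA_MS_AAD_device/azure_device/enable

Splunkd ends the input's script on disable and spawns a fresh one on enable, which should have the same effect as the manual toggle described above.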
Hello, I would like to know if it's possible to upgrade from Splunk 6.2.1 to 8.2.4 with an Enterprise perpetual license? If not, what do I have to do? Thank you all in advance
Unable to see my host in index=_introspection or index=_internal. After running the above query on the same host, I can't see the hostname. Also unable to see the host in ES.
Hi, In the following log entries, I wanted to extract uri in a specific format: log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v2/clients?ski... See more...
Hi, in the following log entries, I want to extract the URI in a specific format:

log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v2/clients?skip=0top=100,MediaType=null,XRemoteIP=null"
log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v1/clients/234,MediaType=null,XRemoteIP=null"
log: a_level="INFO", a_time="null", a_type="type", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v1/users/123,MediaType=null,XRemoteIP=null"

For uri, I want the full extract up to "?" or ",". Also remove any GUIDs and digits from the URL, except for "/v1/" and "/v2/":

http://monolith-xxx.abc.com/v2/clients
http://monolith-xxx.abc.com/v1/clients/
http://monolith-xxx.abc.com/v1/users/

My current Splunk query is as below:

index=aws_abc env=prd-01 uri Method StatusCode ResponseTimeMs
| rex field=log "ResponseTimeMs=(?<ResponseTimeMs>\d+),StatusCode=(?<StatusCode>\d+)"
| rex field=log "\"?Method\"?\=(?<Method>[^,]*)"
| rex field=log "Uri=(?<uri>[^\,]+)"
| rex field=uri mode=sed "s/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|\d*//g"
| table uri,Method,StatusCode,ResponseTimeMs

I get values in the table for all 4, but uri in the table shows as below:

http://monolith-xxx.abc.com/v/clients?isactive=true
http://monolith-xxx.abc.com/v/users/?filter=(Name%startswith%'H')

Expected output:

http://monolith-xxx.abc.com/v2/clients
http://monolith-xxx.abc.com/v1/users/

Please help. Thanks
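A sketch of two adjustments (the regex behavior is worth verifying against the real data): stopping the uri capture at either "?" or "," keeps the query string out, and a negative lookbehind spares the digit that follows "v" while still dropping numeric ID segments; the GUID pass runs first so its digits don't survive into the second pass:

| rex field=log "Uri=(?<uri>[^,?]+)"
| rex field=uri mode=sed "s/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}//g"
| rex field=uri mode=sed "s/(?<!v)\d+//g"

The original \d* alternation matched digits everywhere, including the 1/2 in v1/v2, which is where the /v/clients output came from.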
Hello, we have an Oracle database schema with multiple tables. All these tables have a column called idversion, and each has a view that shows only the version that was specified in the table sesio_te. So the expected way to work with these tables is to

INSERT INTO "XXXX"."SESIO_TE" (sessionvalue, usuario, version_actual) VALUES ('test version', 'splunk', 21)

and then

SELECT * FROM "XXXX"."PRESU_IN_AUD_VI" WHERE ...

What I want to achieve is a dashboard that shows some data in these tables on demand from the DB, not indexed data. So if I have multiple users watching this dashboard and they are asking for different versions, I need to update the version before querying the views. But if you try to do

INSERT INTO "XXXX"."SESIO_TE" (sessionvalue, usuario, version_actual) VALUES ('test version', 'splunk', 21);
SELECT * FROM "XXXX"."PRESU_IN" WHERE ... ;

in the same dbxquery command, it fails saying java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended. Any ideas how to do this? Thank you in advance.
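One direction to explore (a sketch, with a hypothetical procedure name): dbxquery executes a single statement, but DB Connect can invoke stored procedures, so the INSERT-then-SELECT pair could live in one Oracle procedure that sets the session version and returns a ref cursor over the view:

| dbxquery connection=oracle_conn procedure="{call XXXX.GET_VERSIONED_DATA(21)}"

Here oracle_conn stands for whatever the DB Connect connection is named, and GET_VERSIONED_DATA is a procedure you would create yourself. Keeping both statements inside one call also sidesteps the risk of two dashboard users interleaving their version updates between separate dbxquery invocations.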
I need help comparing an ISO 8601 date field with a specific date. Below is a simple example:

index=devices | table device_last_seen

Results:

device_last_seen
2022-01-21T13:09:58Z
2022-01-21T13:10:06Z
2022-01-17T14:56:00Z
2022-01-16T10:57:18Z

My goal is to show only the devices reported in the last 24h. It should be like this:

device_last_seen
2022-01-21T13:09:58Z
2022-01-21T13:10:06Z

However, the search below didn't return any results:

index=devices
| eval last24h=relative_time(now(), "-1d")
| where device_last_seen > last24h
| table device_last_seen

Thanks in advance for your help.
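A sketch of the likely mismatch: relative_time() returns an epoch number while device_last_seen is a string, so the comparison never holds; converting the string first lines the two up (treating Z as a literal in the format is an assumption that the values are always written this way):

index=devices
| eval last_seen_epoch=strptime(device_last_seen, "%Y-%m-%dT%H:%M:%SZ")
| where last_seen_epoch > relative_time(now(), "-24h")
| table device_last_seen

One caveat worth checking: with a literal Z, strptime parses the digits in the search head's local timezone, so if the instance isn't running in UTC the cutoff will be shifted by the local offset and may need correcting.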
Hello, in which direction must firewall openings be configured for indexers and search heads to be able to communicate with the license manager, fetch licenses, etc.? Is it only

<search_head/indexer> --> <license_manager> (on port 8089)

or both ways, so

<search_head/indexer> --> <license_manager> AND <license_manager> --> <search_head/indexer> (on port 8089)?

I have seen network topologies saying different things regarding this. Is one way enough, or do we need both ways on port 8089? Thank you!