All Posts

_time in the data is ignored by collect, and _time should only ever be an epoch anyway - it's a Splunk reserved field, so making it a string is a bad idea. Are you collecting a _raw field, or are you collecting fields without _raw? Are you specifying an index to collect to? What's your collect command? Are you running this as an ad-hoc search or as a scheduled saved search?

If you are not specifying _raw, the first value in the line of collected data will be the one parsed for the timestamp, hence addtime will add the info_* fields to the start of the data. When I want control over _time, I find the safest way to build an event is to collect only _raw, which ensures my timestamp is the only one present:

| eval _raw=printf("_time=%d, ", your_epoch_time_field)
| foreach "*"
    [| eval _raw=_raw.case(isnull('<<FIELD>>'), "",
        mvcount('<<FIELD>>')>1, ", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"",
        true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
     | fields - "<<FIELD>>" ]

This simply builds a _raw field with null fields ignored and all other fields quoted. It also flattens multi-value fields. If you have access to the underlying OS, you can use the spool flag so the file is left in the file system and you can go and see the real file that would be ingested into the index.
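As a rough sketch of how the end of such a search might look (the index name my_summary is only an example, not taken from anyone's actual configuration):

| fields _raw
| collect index=my_summary spool=false

With spool=false the generated file is written under $SPLUNK_HOME/var/run/splunk instead of being spooled for indexing, so you can open it and inspect exactly what would have been written to the index.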
@Egyas Hello, you can drop the events using props.conf and transforms.conf. First, match the events you want to drop using a regex. Say you want to drop events containing "acct=appuser": write a regex for that, apply it via props.conf and transforms.conf, and send those events to the null queue.

Example:

props.conf
[source::xxxxx]
TRANSFORMS-set = setnull

transforms.conf
[setnull]
REGEX = <your regex>, e.g. acct=appuser
DEST_KEY = queue
FORMAT = nullQueue

https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Propsconf#props.conf.example
https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Transformsconf

From the documentation:

* NOTE: Keys are case-sensitive. Use the following keys exactly as they appear.
  queue : Specify which queue to send the event to (can be nullQueue, indexQueue).
  * indexQueue is the usual destination for events going through the transform-handling processor.
  * nullQueue is a destination which causes the events to be dropped entirely.
  _raw  : The raw text of the event.
  _meta : A space-separated list of metadata for an event.
  _time : The timestamp of the event, in seconds since 1/1/1970 UTC.

TRANSFORMS-<class> = <transform_stanza_name>, <transform_stanza_name2>,...
* Used for creating indexed fields (index-time field extractions).
* <class> is a unique literal string that identifies the namespace of the field you're extracting.
  Note: <class> values do not have to follow field name syntax restrictions. You can use characters other than a-z, A-Z, and 0-9, and spaces are allowed. <class> values are not subject to key cleaning.
* <transform_stanza_name> is the name of your stanza from transforms.conf.
* Use a comma-separated list to apply multiple transform stanzas to a single TRANSFORMS extraction. Splunk software applies them in the list order. For example, this sequence ensures that the [yellow] transform stanza gets applied first, then [blue], and then [red]:
  [source::color_logs]
  TRANSFORMS-colorchange = yellow, blue, red
* See the RULESET-<class> setting for additional index-time transformation options.
The data gets ingested only once, when the inputs are first configured. The interval for the inputs has been set to 3600, but still no events are written. Internal logs show that the inputs are running, but no new events are logged.
Can anyone please provide me with a .js file for displaying a popup in my Splunk dashboard?
This is better achieved with transaction.

| transaction userId startswith="status=started" endswith="status=connected"
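As a rough sketch of how that could feed the 15-second bucketing asked about, using transaction's built-in duration field (field names follow the sample data; not tested against the real events):

| transaction userId startswith="status=started" endswith="status=connected"
| where duration > 0
| bin span=15s duration
| stats dc(userId) AS Users BY duration

Because transaction only emits complete started-to-connected groups here, users with no matching pair drop out on their own.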
Getting the errors below while installing the splunklib and splunk-sdk Python packages. Any resolutions please?

Building wheels for collected packages: pycrypto
  Building wheel for pycrypto (pyproject.toml) ... error
  error: subprocess-exited-with-error
  × Building wheel for pycrypto (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [28 lines of output]
      warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
      winrand.c
      ............
      [end of output]
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for pycrypto
Failed to build pycrypto
ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects
Hello @PickleRick @bowesmana @jotne

When I ran the query with the summaryindex command, the data from the query got pushed just fine, as in my previous response to kiran. When I ran the query with the collect command, the data from the query did not get pushed. I could see the _raw data when I used testmode=true, but when I set testmode=false it ran and the data didn't show up (although I already set the time range to all time).

Different issue: I also tried to set _time to info_max_time by setting addtime=false and using the command below, but _time was always set to the current time. (I am aware that by default it's set to info_min_time if addtime=true.) I can open another post for this if needed.

| eval _time=strftime(info_max_time,"%m/%d/%y %I:%M:%S %p")

Please suggest. I appreciate your help. Thank you
Thank you for the information. Actually, my company is a Splunk Partner, so I can log in to the demo sites. However, I'm still looking for a raw, commonly used data source that I can customize and fully control in my own Splunk environment. Regards.
Hello all, I'm trying to get the duration between the first "started" event and the first "connected" event following it, grouped by each user id.

The Data
I'm working with events structured like the following (assume these all have real timestamps; I am abbreviating to keep this short; the item numbers on the left are for annotation purposes only):

item no.  userId  status     _time (abbreviated)
0         1       started    00:00
1         1       connected  00:05
2         2       started    00:30
3         2       connected  00:40
4         2       connected  01:30
5         4       started    02:00
6         3       connected  02:05
7         3       started    02:10
8         3       connected  02:20
9         4       connected  02:30
10        5       started    03:00

What I'm looking to achieve:
A) I need to start the clock only when the user has a "started" state (e.g., item no. 6 should be neglected).
B) It must take the first connected event following "started" (e.g., item no. 3 is the end item, with item no. 4 being ignored completely).
C) I want to graph the number of users bucketed by intervals of 15 seconds.
D) There must be both a started and a connected event (e.g., userId 5 would not be counted).

How would I approach this? I tried the following:

... status="started" OR status="connected"
| stats range(_time) AS duration BY userId
| where duration > 0
| bin span 15 duration
| stats dc(userid) as Users by duration

But this isn't quite doing what I want it to do, and I also get events where there's no duration.
Hi Splunk Community,

I'm trying to list all Splunk local users (authentication system = splunk). The search below lists all users, both SAML and Splunk, but I'm only looking for local accounts.

| rest /services/authentication/users splunk_server=local
| fields roles title realname
| rename title as username

Thanks!
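One direction I've been considering (a sketch only, assuming each user entry from this endpoint exposes a type attribute that distinguishes Splunk-native accounts from SAML/LDAP ones):

| rest /services/authentication/users splunk_server=local
| search type="Splunk"
| fields roles title realname type
| rename title as username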
I missed your comment re: accelerated data models earlier. The field values should be available at search time, either from _raw or tsidx, and then stored in the summary index. Off the top of my head, I don't know if the segmenters impact INDEXED_EXTRACTIONS = w3c, but they shouldn't impact transforms-based indexed extractions or search-time field extractions from other source types.
Also, yes, your proposal to install Splunk Enterprise on Server B and Splunk Universal Forwarder on Server C will allow you to run queries against Server A, assuming you have connectivity and a database account with appropriate access, and to forward the events to Server C and downstream to Splunk Cloud. Note, however, that sys.fn_get_audit_file does not scale. If you query .sqlaudit files through this function, your SQL Server administrator should store only the .sqlaudit files necessary to allow Splunk to execute queries and index events in a timely manner. That is, rotation and retention of live .sqlaudit files should be configured with Splunk and fn_get_audit_file performance in mind. You'll need to test performance in your environment to understand its constraints.
This should work on Linux:

1. Install PowerShell Core.

2. As the Splunk Universal Forwarder user (splunk or splunkfwd), install the SqlServer module as before:

$ /bin/pwsh
PS> Install-Module SqlServer

If Splunk Universal Forwarder runs as root, install the SqlServer module as root.

3. Copy Stream-SqlAudit.ps1 to an appropriate directory, e.g. $SPLUNK_HOME/bin/scripts. Note the addition of the interpreter directive on the first line, and that the temporary file path is joined with a forward slash for Linux.

#!/bin/pwsh
$file = New-TemporaryFile
$output = $file.Open([System.IO.FileMode]::Append, [System.IO.FileAccess]::Write)
$stdin = [System.Console]::OpenStandardInput()
$stdout = [System.Console]::Out
$buffer = New-Object byte[] 16384
[int]$bytes = 0
while (($bytes = $stdin.Read($buffer, 0, $buffer.Length)) -gt 0) {
    $output.Write($buffer, 0, $bytes)
}
$output.Flush()
$output.Close()
Read-SqlXEvent -FileName "$($file.DirectoryName)/$($file.Name)" | %{
    $event = $_.Timestamp.UtcDateTime.ToString("o")
    $_.Fields | %{
        if ($_.Key -eq "permission_bitmask") {
            $event += " permission_bitmask=`"0x$([System.BitConverter]::ToInt64($_.Value, 0).ToString("x16"))`""
        } elseif ($_.Key -like "*_sid") {
            $sid = $null
            $event += " $($_.Key)=`""
            try {
                $sid = New-Object System.Security.Principal.SecurityIdentifier($_.Value, 0)
                $event += "$($sid.ToString())`""
            } catch {
                $event += "`""
            }
        } else {
            $event += " $($_.Key)=`"$([System.Web.HttpUtility]::JavaScriptStringEncode($_.Value.ToString()))`""
        }
    }
    $stdout.WriteLine($event)
}
$file.Delete()

Make sure the file is executable, e.g.:

$ chmod 0750 $SPLUNK_HOME/bin/scripts/Stream-SqlAudit.ps1

4. Update props.conf:

[source::....sqlaudit]
unarchive_cmd = $SPLUNK_HOME/bin/scripts/Stream-SqlAudit.ps1
unarchive_cmd_start_mode = direct
sourcetype = preprocess-sqlaudit
NO_BINARY_CHECK = true

[preprocess-sqlaudit]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false

[sqlaudit]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N%Z
MAX_TIMESTAMP_LOOKAHEAD = 30
KV_MODE = auto_escaped

5. Update inputs.conf:

[monitor:///tmp/*.sqlaudit]
index = main
sourcetype = sqlaudit

/tmp is just an example; use whatever file system and path makes the most sense for your deployment. The Splunk Universal Forwarder user must have read and execute access to all directories in the path and read access to the .sqlaudit files. As before, your temporary directory should have enough free space to accommodate your largest .sqlaudit file. Depending on your Splunk configuration, Splunk Universal Forwarder may process multiple files concurrently. If that's the case, ensure you have enough free space for all temporary files.

Finally, let us know how it goes!
Thanks. That resolved the issue.
Thank you @bowesmana. I got sick of beating my head against a wall and put in a workaround of sorts. The .csv in question came from an outputlookup I ran against some indexed data. For some reason, I could not filter out the non-alphanumeric characters from the .csv itself, but I could with the indexed data, so I filtered them out with a rex statement and then re-ran my outputlookup to create a new .csv. Thank you for taking the time to reply!
The first step is to make sure the data is valid JSON, because the spath command will not work with invalid JSON. jsonlint.com rejected the sample object.

Here is a run-anywhere example that extracts payload as a single field:

| makeresults format=json data="[{\"content\" : { \"jobName\" : \"PAY\", \"region\" : \"NZ\", \"payload\" : [ { \"Aresults\" : [ { \"count\" : \"6\", \"errorMessage\" : null, \"filename\" : \"9550044.csv\" } ] }, { \"Bresults\" : [ { \"count\" : \"6\", \"errorMessage\" : null, \"filename\" : \"9550044.csv\" } ] } ] }} ]"
| spath output=payload content.payload{}
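If the goal is to go further and get each payload element onto its own row with its nested fields extracted, one possible follow-on (a sketch only, not tested against the real data) is:

| spath output=payload content.payload{}
| mvexpand payload
| spath input=payload

After the mvexpand, each result carries one payload object, and the second spath auto-extracts fields such as Aresults{}.count and Aresults{}.filename from it.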
If your search is running every minute, then it will run at some point AFTER 7:01 - what search window are you looking at? If you are looking from -1m to now, then the search window will be from 1 minute prior to the time the search runs up to the time it runs. If your search window is -1m@m to @m, then it will be searching from 7:00 to 7:01.

If you have zero LAG time between your ingested data being created at its source and the time it is indexed in Splunk, then you should get your 20 results. But imagine all those events generated between 7:00 and 7:01 actually arrive in Splunk and get indexed between 7:01 and 7:02 - then there will be 0 events when your search runs and looks at the 7:00 to 7:01 window. Then at 7:02, when the search runs again and looks for events with time from 7:01 to 7:02, there are also 0 events, because the timestamps of the 20 events were between 7:00 and 7:01.

This is important when you create alerts - Splunk can never be totally realtime, so you need to understand any lag in your event ingestion and set your search window accordingly. This often means the window should be a little in the past, e.g. from -3m@m to -2m@m, so you are looking from 3 minutes behind to 2 minutes behind - this gives the events time to get created at source, sent to Splunk and then indexed.

You can check this by looking at the _time and _indextime fields in Splunk, e.g.

| eval ixt=strftime(_indextime, "%F %T")
| eval lag=_indextime - _time
| table _time ixt lag

to see what your data lag is.
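For illustration only (index and sourcetype are placeholders), a per-minute alert search using that shifted window might look like:

index=your_index sourcetype=your_sourcetype earliest=-3m@m latest=-2m@m
| stats count

Each run then covers a one-minute slice far enough in the past that the events have had time to be ingested.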
I believe the correct syntax is

search time_difference_hours > 4

but you can also put that in the search rather than in the alert with

| where time_difference_hours > 4

and just trigger on number of results.
Does anybody have a better doc for this page? I think it's a copy and paste gone wrong. The UiPath configuration is mixed with the Splunk UF configuration for Windows. rpm_app_for_splunk/docs/UiPath_orchestrator_nLog.MD at main · splunk/rpm_app_for_splunk · GitHub
Without knowing what the characters actually are, I can suggest this eval logic that may help you clean up the data:

| eval tmpVM=split(VM, "")
| eval newVM=mvjoin(mvmap(tmpVM, if(tmpVM>=" " AND tmpVM<="z", tmpVM, null())), "")

This breaks the string up into its individual characters, then the mvmap checks that each character is between space and lower-case z (which covers most of the printable ASCII characters) and joins the result back together again.

If the goal is to fix up the CSV, then this should work and you can rewrite the CSV. But if this is a general problem with the CSV being written regularly, then you should try to understand the data that's coming in. It sounds like it could be an encoding issue, and there may be some spurious UTF (8 or 16) characters in there. Those "‚Äã" characters are all valid characters and would show up as such in Splunk, so Excel is just doing its best.

You could also add in x=len(VM) to see how many additional characters are there, and the tmpVM field in the above eval snippet will show you what Splunk thinks of the data.
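For instance, a small sketch along those lines (the lookup name my_lookup.csv is just a placeholder) that compares lengths before and after the cleanup so you can see how many stray characters were dropped:

| inputlookup my_lookup.csv
| eval tmpVM=split(VM, "")
| eval newVM=mvjoin(mvmap(tmpVM, if(tmpVM>=" " AND tmpVM<="z", tmpVM, null())), "")
| eval origLen=len(VM), cleanLen=len(newVM)
| table VM newVM origLen cleanLen

If the cleaned values look right, an | outputlookup back to the same file would rewrite the CSV with the stripped values.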