All Posts

Getting the errors below while installing the splunklib and splunk-sdk Python packages. Any resolutions, please?

Building wheels for collected packages: pycrypto
  Building wheel for pycrypto (pyproject.toml) ... error
  error: subprocess-exited-with-error
  × Building wheel for pycrypto (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [28 lines of output]
      warning: GMP or MPIR library not found; Not building Crypto.PublicKey._fastmath.
      winrand.c
      ............
      [end of output]
  note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycrypto
Failed to build pycrypto
ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects
Hello @PickleRick @bowesmana @jotne

When I ran the query with the summaryindex command, the data from the query got pushed just fine, as in my previous response to kiran. When I ran the query with the collect command, the data did not get pushed. I could see the _raw data when I used testmode=true, but when I set testmode=false, the search ran and the data didn't show up (even though the time range was already set to All time).

Different issue: I also tried to set _time to info_max_time by setting addtime=false and using the command below, but _time always ends up set to the current time. (I am aware that by default it's set to info_min_time if addtime=true.) I can open another post for this if needed.

| eval _time=strftime(info_max_time,"%m/%d/%y %I:%M:%S %p")

Please suggest. I appreciate your help. Thank you.
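One possible cause worth checking, sketched under assumptions (the summary index name my_summary is hypothetical, and this has not been verified against this environment): strftime() returns a formatted string, while collect expects _time as epoch seconds, so a string value may be discarded in favour of the current time. Keeping info_max_time numeric might behave differently:

```spl
... your search ...
| addinfo
| eval _time=info_max_time
| collect index=my_summary addtime=false
```

Here addinfo supplies info_max_time as epoch seconds, which is assigned to _time unchanged rather than converted to a display string.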
Thank you for your information. Actually, my company is a Splunk Partner, so I can log in to the demo sites. Moreover, I'm still looking for a raw, commonly used data source that I can customize and fully control in my own Splunk. Regards.
Hello all, I'm trying to get the duration between the first "started" event and the first "connected" event following it, grouped by each user id.

The Data

I'm trying to handle events structured like the following (assume these all have real timestamps; I am abbreviating for brevity. The item numbers on the left are for annotation purposes only):

item  userId  status     _time (abbreviated)
0     1       started    00:00
1     1       connected  00:05
2     2       started    00:30
3     2       connected  00:40
4     2       connected  01:30
5     4       started    02:00
6     3       connected  02:05
7     3       started    02:10
8     3       connected  02:20
9     4       connected  02:30
10    5       started    03:00

What I'm looking to achieve:

A) I need to make sure I start the clock whenever the user has a "started" state (e.g., item no. 6 should be neglected).
B) It must take the first connected event following "started" (e.g., item no. 3 is the end item, with item no. 4 being ignored completely).
C) I want to graph the number of users bucketed by intervals of 15 seconds.
D) There must be both a started and a connected event (e.g., userId 5 would not be included).

How would I approach this? I tried the following:

... status="started" OR status="connected"
| stats range(_time) AS duration BY userId
| where duration > 0
| bin span=15 duration
| stats dc(userId) as Users by duration

But this isn't quite doing what I want it to do, and I also get events where there's no duration.
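One possible approach for the started→connected pairing described above, as a sketch (untested against real data; assumes events carry userId, status, and _time as shown): carry the most recent "started" time forward per user with streamstats, keep only "connected" events that follow one, then take the first connected event per started episode.

```spl
... status="started" OR status="connected"
| sort 0 userId _time
| eval start_time=if(status="started", _time, null())
| streamstats last(start_time) as start_time by userId
| where status="connected" AND isnotnull(start_time)
| stats min(_time) as connect_time by userId start_time
| eval duration=connect_time - start_time
| bin span=15 duration
| stats dc(userId) as Users by duration
```

streamstats ignores null values, so last(start_time) effectively fills the started time down onto following events; a "connected" with no prior "started" (item no. 6) keeps a null start_time and is dropped, and users with no connected event (userId 5) never survive the where clause.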
Hi Splunk Community, I'm trying to list all Splunk local users (authentication system = Splunk). The search below lists all users, SAML and Splunk, but I'm only looking for local accounts.

| rest /services/authentication/users splunk_server=local
| fields roles title realname
| rename title as username

Thanks!
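A sketch that may get you there, assuming your Splunk version exposes an authentication type field on this endpoint (worth confirming the exact field name and values first with | rest /services/authentication/users splunk_server=local | table *):

```spl
| rest /services/authentication/users splunk_server=local
| search type="Splunk"
| fields roles title realname
| rename title as username
```

If the field is present, values such as Splunk, LDAP, or SAML distinguish the authentication system, so filtering on type="Splunk" keeps only local accounts.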
I missed your comment re: accelerated data models earlier. The field values should be available at search time, either from _raw or tsidx, and then stored in the summary index. Off the top of my head, I don't know if the segmenters impact INDEXED_EXTRACTIONS = w3c, but they shouldn't impact transforms-based indexed extractions or search-time field extractions from other source types.
Also, yes, your proposal to install Splunk Enterprise on Server B and Splunk Universal Forwarder on Server C will allow you to run queries against Server A, assuming you have connectivity and a database account with appropriate access, and to forward the events to Server C and downstream to Splunk Cloud. Note, however, that sys.fn_get_audit_file does not scale. If you query .sqlaudit files through this function, your SQL Server administrator should store only the .sqlaudit files necessary to allow Splunk to execute queries and index events in a timely manner. That is, rotation and retention of live .sqlaudit files should be configured with Splunk and fn_get_audit_file performance in mind. You'll need to test performance in your environment to understand its constraints.
This should work on Linux:

Install PowerShell Core.

As the Splunk Universal Forwarder user--splunk or splunkfwd--install the SqlServer module as before:

$ /bin/pwsh
PS> Install-Module SqlServer

If Splunk Universal Forwarder runs as root, install the SqlServer module as root.

Copy Stream-SqlAudit.ps1 to an appropriate directory, e.g. $SPLUNK_HOME/bin/scripts. Note the addition of the interpreter directive on the first line.

#!/bin/pwsh
# Copy stdin to a temporary file so Read-SqlXEvent can read it by name.
$file = New-TemporaryFile
$output = $file.Open([System.IO.FileMode]::Append, [System.IO.FileAccess]::Write)
$stdin = [System.Console]::OpenStandardInput()
$stdout = [System.Console]::Out
$buffer = New-Object byte[] 16384
[int]$bytes = 0
while (($bytes = $stdin.Read($buffer, 0, $buffer.Length)) -gt 0) {
    $output.Write($buffer, 0, $bytes)
}
$output.Flush()
$output.Close()
# FullName avoids a hard-coded '\' path separator, which doesn't resolve on Linux.
Read-SqlXEvent -FileName $file.FullName | %{
    $event = $_.Timestamp.UtcDateTime.ToString("o")
    $_.Fields | %{
        if ($_.Key -eq "permission_bitmask") {
            $event += " permission_bitmask=`"0x$([System.BitConverter]::ToInt64($_.Value, 0).ToString("x16"))`""
        } elseif ($_.Key -like "*_sid") {
            $sid = $null
            $event += " $($_.Key)=`""
            try {
                $sid = New-Object System.Security.Principal.SecurityIdentifier($_.Value, 0)
                $event += "$($sid.ToString())`""
            } catch {
                $event += "`""
            }
        } else {
            $event += " $($_.Key)=`"$([System.Web.HttpUtility]::JavaScriptStringEncode($_.Value.ToString()))`""
        }
    }
    $stdout.WriteLine($event)
}
$file.Delete()

Make sure the file is executable, e.g.:

$ chmod 0750 $SPLUNK_HOME/bin/scripts/Stream-SqlAudit.ps1

Update props.conf:

[source::....sqlaudit]
unarchive_cmd = $SPLUNK_HOME/bin/scripts/Stream-SqlAudit.ps1
unarchive_cmd_start_mode = direct
sourcetype = preprocess-sqlaudit
NO_BINARY_CHECK = true

[preprocess-sqlaudit]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false

[sqlaudit]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N%Z
MAX_TIMESTAMP_LOOKAHEAD = 30
KV_MODE = auto_escaped

Update inputs.conf:

[monitor:///tmp/*.sqlaudit]
index = main
sourcetype = sqlaudit

/tmp is just an example. Use whatever file system and path makes the most sense for your deployment. The Splunk Universal Forwarder user must have read and execute access to all directories in the path and read access to the .sqlaudit files. As before, your temporary directory should have enough free space to accommodate your largest .sqlaudit file. Depending on your Splunk configuration, Splunk Universal Forwarder may process multiple files concurrently. If that's the case, ensure you have enough free space for all temporary files.

Finally, let us know how it goes!
Thanks. That resolved the issue.
Thank you @bowesmana. I got sick of beating my head against a wall and put in a workaround of sorts. The .csv in question came from an outputlookup I ran against some indexed data. For some reason, I could not filter out non-alphanumeric characters from the .csv itself, but I could with the indexed data. So I filtered them out with a rex statement, then re-ran my outputlookup to create a new .csv. Thank you for taking the time to reply!
The first step is to make sure the data is valid JSON, because the spath command will not work with invalid JSON. jsonlint.com rejected the sample object.

Here is a run-anywhere example that extracts payload as a single field:

| makeresults format=json data="[{\"content\" : { \"jobName\" : \"PAY\", \"region\" : \"NZ\", \"payload\" : [ { \"Aresults\" : [ { \"count\" : \"6\", \"errorMessage\" : null, \"filename\" : \"9550044.csv\" } ] }, { \"Bresults\" : [ { \"count\" : \"6\", \"errorMessage\" : null, \"filename\" : \"9550044.csv\" } ] } ] }} ]"
| spath output=payload content.payload{}
If your search is running every minute, then it will run at some point AFTER 7:01 - what is the search window you are looking at? If you are looking at from -1m to now, then the search window will be 1 minute prior to the time of the search up to the time the search runs. If your search window is -1m@m to @m, then it will be searching from 7:00 to 7:01.

If you have zero LAG time between your ingested data being created at its source and the time it is indexed in Splunk, then you should get your 20 results. But imagine all those events generated between 7:00 and 7:01 actually arrive in Splunk and get indexed between 7:01 and 7:02 - then there will be 0 results when your search runs over the 7:00 to 7:01 window. Then at 7:02, when the search runs again and looks for events with time from 7:01 to 7:02, there are also 0 events, because the timestamps of the 20 events were between 7:00 and 7:01.

This is important when you create alerts - Splunk can never be totally realtime, so you need to understand any lag in your event ingestion and create your search window accordingly. This often means that when you run your search, the window should be a little in the past, e.g. from -3m@m to -2m@m, so you are looking from 3 minutes behind to 2 minutes behind - this gives the events time to get created at source, sent to Splunk, and then indexed.

You can check this by comparing the _time and _indextime fields in Splunk:

| eval ixt=strftime(_indextime, "%F %T")
| eval lag=_indextime - _time
| table _time ixt lag

to see what your data lag is.
I believe the correct syntax is

search time_difference_hours > 4

but you can also put that in the search rather than in the alert with

| where time_difference_hours > 4

and just trigger on number of results.
Does anybody have a better doc for this page? I think it's a copy and paste gone wrong. The UiPath configuration is mixed with the Splunk UF configuration for Windows. rpm_app_for_splunk/docs/UiPath_orchestrator_nLog.MD at main · splunk/rpm_app_for_splunk · GitHub
Without knowing what the characters actually are, I can suggest this eval logic that may help you clean up the data:

| eval tmpVM=split(VM, "")
| eval newVM=mvjoin(mvmap(tmpVM, if(tmpVM>=" " AND tmpVM<="z", tmpVM, null())), "")

This breaks the string up into individual characters; the mvmap then checks that each character is between space and lowercase z (which covers most of the printable ASCII characters) and joins the string back together again.

If the goal is to fix up the csv, then this should work and you can rewrite the csv. But if this is a recurring problem with the CSV being written regularly, then you should try to understand how that data is getting in. It sounds like it could be an encoding issue, and there may be some spurious UTF-8 or UTF-16 characters in there. Those "‚Äã" characters are all valid characters, but they would show up as such in Splunk, so Excel is just doing its best.

You could also add x=len(VM) to see how many additional characters are there, and the tmpVM variable in the eval snippet above will show you what Splunk thinks of the data.
The problem is that seriesColors is just a list of colours used in order, so if there are two rows, the FAIL row is always first and the first colour in the series applies.

I believe the only way to solve this is by adding a <done> clause after the search to calculate what the series colours should be and then use the token, like this:

<search>
  ...
  <done>
    <eval token="series_colour">case($job.resultCount$=2, "#BA0F30,#116530", $job.resultCount$=1 AND match($result.chart$,"FAIL"), "#BA0F30", $job.resultCount$=1 AND match($result.chart$,"PASS"), "#116530")</eval>
  </done>
</search>
...
<option name="charting.seriesColors">[$series_colour$]</option>

So, the eval part of the done clause checks whether there are two rows, in which case the series has two values; otherwise it checks the chart field to see if it is FAIL or PASS and sets the single series value as appropriate. Then in the seriesColors option, use the token.
Is this a classic or dashboard studio dashboard? Is TimeRangepanel the name of the dashboard or something else? How are you populating the dashboard panels - if it's from a saved panel or the panel is coming from a report, is the report private or is it a public report?  It sounds like the customer does not have permissions to see some part of the dashboard.  
Use this:

| eval overallpass=if('chassis ready'="yes" AND result="pass" AND synchronize="yes", "OK", "Not Okay")
| stats values(overallpass) as overallpass by hostname

NB: The chassis ready field must be surrounded by SINGLE quotes, as eval needs to understand that the space is part of the field name. The values must be double quoted.
Here I am, over 3 years later, finding my own answer to help me out again  
Hey @isoutamo, I may have found the fix. I noticed that inputs.conf on the Splunk indexer side was missing the port in one of the stanzas. Adding "[splunktcp:<PORT>]" in $SPLUNK_BASE/etc/system/local on the indexers fixed the issue.