All Posts

Hi, have you looked at mongod.log (or something similar) to see why mongod didn't start? r. Ismo
This seems to be a bug in that dashboard. There shouldn't be any fixed number of days; instead it should offer a time picker so you can select the time span you need. Probably the easiest way is to create your own dashboard with a time picker and fix it that way. Please also open a support case for it, as Splunk should have a dashboard that shows those statistics based on their current license policy/model.
One comment: user creation only happens when installing with the package manager. If you are using the tar package, you must create those users yourself if you want to use them.
I can get the result for 60 days without changing index retention; I do not know why. But I had to change the -30 days to -60 days in a Splunk-owned form that does not allow editing. I guess I will keep this in mind and see if Splunk changes anything in the coming versions. Best regards, Altin
Hi @jeradb, you could use a rex command rather than eval, like the following: | rex "User Name: (?<User_Name>[^ \n]+)" You can test this regex at https://regex101.com/r/gJ0I26/1 But one question: did you install the Splunk_TA_Windows add-on (https://splunkbase.splunk.com/app/742)? With this add-on you should already have this field extracted without using a custom regex. Ciao. Giuseppe
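For anyone who wants to sanity-check the pattern outside Splunk before deploying it, here is a minimal Python sketch over a fragment of the sample event (note Python uses the (?P<name>...) form for named groups, while Splunk's rex accepts (?<name>...)):

```python
import re

# Fragment of the Message field from the sample event
message = "Computer Name: COM-HV01 User Name: Test\\test.user Writing is completed on drive (E:)."

# Same idea as | rex "User Name: (?<User_Name>[^ \n]+)"
match = re.search(r"User Name: (?P<User_Name>[^ \n]+)", message)
print(match.group("User_Name"))  # Test\test.user

# Stripping the DOMAIN\ prefix, since the question asked for the name without domain
print(match.group("User_Name").split("\\")[-1])  # test.user
```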
LogName=Application EventCode=1004 EventType=4 ComputerName=Test.local User=NOT_TRANSLATED Sid=S-1-5-21-2704069758-3089908202-2921546158-1104 SidType=0 SourceName=RoxioBurn Type=Information RecordNumber=16834 Keywords=Classic TaskCategory=Optical Disc OpCode=Info Message=Date: Wed Feb 28 14:22:59 2024 Computer Name: COM-HV01 User Name: Test\test.user Writing is completed on drive (E:). Project includes 0 folder(s) and 1 file(s). Volume Label: 2024-02-28 Volume SN: 0 Volume ID: \??\Volume{b282bf1c-3dde-11ed-b48e-806e6f6e6963} Type: Unknown Status Of Media: Appendable,Blank,Closed session Files: C:\ProgramData\Roxio Log Files\Test.test.user_20240228142142.txt SHA1: 7c347a6724dcd243d396f9bb5e560142f26b8aa4 File System: None Disc Number: 1 Encryption: Yes User Password: Yes Spanned Set: No Data Size On Disc Set: 511 Bytes Network Volume: No

How would I write an eval command to extract User Name (without the domain), Status Of Media, Data Size On Disc Set, and Files from the Message field?
When trying to run the ML demos on my MacBook M2, running Splunk in a Docker environment, I get the following error in the middle of the display: Error in 'fit' command: Failed to find Python for Scientific Computing Add-on (Splunk_SA_Scientific_Python_linux_x86_64) After installing both the Linux x64 one (yes, it also runs fast under Rosetta 2) and the mac_Silicon one and restarting the server, the error remains. Any help appreciated. Thank you
Hi @Nawab, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Yes, I can help, but it's also in the manual. | multikv forceheader=1
I don't know which third-party software offers that counter, if it's even available at all.
This works for me.
@Nawab, please try the steps below: https://docs.splunk.com/Documentation/ES/7.3.0/Admin/CustomizeIR In the Splunk Enterprise Security app, select Configure. Select General, then select General Settings. Go to the Enhanced Incident Review workflow panel. Select Turn off.
When I navigate to Settings > Tokens, I get this error message:

KVStore is not ready. Token auth system will not work.

The Splunk logs show this:

ERROR JsonWebToken [233289 TcpChannelThread] - KVStore is not ready. Token auth system will not work.
ERROR KVStoreConfigurationProvider [233052 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=KVStore service will not start because kvstore process terminated
ERROR KVStoreBulletinBoardManager [233053 MongodLogThread] - KV Store changed status to failed. KVStore process terminated.

How can this be fixed?
Hi, @yuanliu! Thanks for your reply and clue. I have no spare instance right now, but I've exported the contents of this KV Store time-based lookup to a CSV file, reconfigured the lookup definition to use that CSV, and now it works. It looks like the problem is related only to KV Store time-based lookups. So this particular problem is solved, but I'd like to know whether such behavior is expected with KV Store lookups, or whether it is a bug or my misconfiguration. My deployment is Splunk Enterprise 7.2.6 with MongoDB.
Hello, I have a query that gathers data from API calls: P90/P95/P99 response times, API response times in time buckets (<1s, 1 to 3 seconds, up to >10s), and Avg and Peak TPS. No matter how much I try, I am unable to get these to report hourly over the course of the last 24 hours. I am using multiple joins in the query.

index=X
| eval eTime = responsetime
| stats count(responsetime) as TotalCalls, p90(responsetime) as P90Time, p95(responsetime) as P95Time, p99(responsetime) as P99Time by fi
| eval P90Time=round(P90Time,2)
| eval P95Time=round(P95Time,2)
| eval P99Time=round(P99Time,2)
| table TotalCalls,P90Time,P95Time,P99Time
| join type=left uri
    [search index=X
    | eval pTime = responsetime
    | eval TimeFrames = case(pTime<=1, "0-1s%", pTime>1 AND pTime<=3, "1-3s%", pTime>3, ">3s%")
    | stats count as CallVolume by platform, TimeFrames
    | eventstats sum(CallVolume) as Total
    | eval Percentage=(CallVolume/Total)*100
    | eval Percentage=round(Percentage,2)
    | chart values(Percentage) over platform by TimeFrames
    | sort -TimeFrames]
| join type=left uri
    [search index=X
    | eval resptime = responsetime
    | bucket _time span=1s
    | stats count as TPS by _time, fi
    | stats max(TPS) as PeakTPS, avg(TPS) as AvgTPS by fi
    | eval AvgTPS=round(AvgTPS,2)
    | fields PeakTPS, AvgTPS]

My stats currently look like this:

TotalCalls  P90Time  P95Time  P99Time  0-1s%  1-3s%  AvgTPS  Platform  PeakTPS
1565113     0.35     0.44     1.283    98.09  1.91   434.75  abc       937

I just need these stats every hour over the course of the last X days. I am only able to get certain columns of data; the chart in the first join and the fields in the second join are somehow messing it up.
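One likely direction (an assumption, not a confirmed fix for this exact query): each stats needs the hour bucket in its by clause, i.e. | bin _time span=1h before | stats ... by _time, fi, rather than joining totals computed over the whole window. The grouping idea can be sketched outside SPL in Python with made-up sample data:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sample data: (timestamp, response time in seconds) pairs.
# In Splunk, "| bin _time span=1h" does this truncation before stats.
events = [
    (datetime(2024, 2, 28, 10, 5), 0.3),
    (datetime(2024, 2, 28, 10, 40), 1.2),
    (datetime(2024, 2, 28, 11, 15), 0.4),
    (datetime(2024, 2, 28, 11, 50), 0.5),
]

# Truncate each timestamp to the top of its hour, then aggregate per bucket.
buckets = defaultdict(list)
for ts, rt in events:
    buckets[ts.replace(minute=0, second=0, microsecond=0)].append(rt)

for hour, times in sorted(buckets.items()):
    print(hour.isoformat(), "calls:", len(times), "max:", max(times))
# 2024-02-28T10:00:00 calls: 2 max: 1.2
# 2024-02-28T11:00:00 calls: 2 max: 0.5
```

The same principle applies to each subsearch: compute every aggregate per hour bucket so the columns line up, instead of joining window-wide totals onto hourly rows.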
Are you saying you get raw events that are fragments of an HTML document? In any case, even though HTML is not an ideal format for structured data, treating it as plain text still carries the usual risks, so I advise against it. Use spath to pretend that it is XML. You didn't give a large enough snippet to show how Environment is actually coded, and I don't want to speculate (read tea leaves), so I am going to use Vendor as the groupby field in my example. This is what I would do:

| spath
| eval Vendor = mvindex('tr.td', 0)
| eval Issues = tonumber(mvindex('tr.td', 2))
| eval Running = tonumber(mvindex('tr.td', 1)) - Issues
| stats sum(Running) as Running_count sum(Issues) as Issues_count by Vendor

Here is an emulation you can play with and compare with real data:

| makeresults
| eval log = mvappend("</tr> <tr> <td >Apple</td> <td >59</td> <td >7</td>", "</tr> <tr> <td >Samsung</td> <td >61</td> <td >13</td>", "</tr> <tr> <td >Oppo</td> <td >34</td> <td >5</td>", "</tr> <tr> <td >Vivo</td> <td >38</td> <td >11</td>")
| mvexpand log
| rename log AS _raw
``` data emulation above ```

The output of this emulation is:

Vendor   Running_count  Issues_count
Apple    52             7
Oppo     29             5
Samsung  48             13
Vivo     27             11
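As a cross-check of the arithmetic (purely illustrative, outside Splunk), a small Python sketch can parse the same emulated rows and reproduce the Running/Issues split:

```python
import re

# The same four sample rows used in the SPL emulation above
rows = [
    "</tr> <tr> <td >Apple</td> <td >59</td> <td >7</td>",
    "</tr> <tr> <td >Samsung</td> <td >61</td> <td >13</td>",
    "</tr> <tr> <td >Oppo</td> <td >34</td> <td >5</td>",
    "</tr> <tr> <td >Vivo</td> <td >38</td> <td >11</td>",
]

results = {}
for row in rows:
    # Grab the three <td> cells: vendor, total, issues
    vendor, total, issues = re.findall(r"<td >([^<]+)</td>", row)
    # Running = total - issues, mirroring the eval in the SPL above
    results[vendor] = (int(total) - int(issues), int(issues))

print(results)
# {'Apple': (52, 7), 'Samsung': (48, 13), 'Oppo': (29, 5), 'Vivo': (27, 11)}
```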
Well, I somehow fixed my problem by going to the "Bucket Status" page and summarizing the affected bucket in the repair tasks tab. Can someone explain what that did? I still do not get it.
Hi @richgalloway, can I know what kind of third-party software would be needed to collect the value and send it to Splunk? I need "% Committed Bytes In Use" to be present in the Perfmon:Process stanza, that is, "% Committed Bytes In Use" should appear in that counter list. How can I get this added? Thanks
Well, no. While the add-on might contain some faulty definitions, it won't prevent the environment from searching in general or cause licensing warnings.
So after looking around in Splunk, I found the bucket. Any ideas what to do with it?

Bucket Status: _internal~1260~8D19E36A-C3DF-465D-9B7E-908324F333E5
Action: _internal does not meet: primacy & rf
3 day(s) 20 hour(s) 10 minute(s)
Cannot replicate as bucket hasn't rolled yet.