All Posts


Last week v9.1.2 was released (6 Nov 2023, I think). After installing this version on my test instance (previously v9.1.1), everything seems to work again, including sendemail - no issues found. Great!

After installing this version several days later on our production instance (previously 9.1.0.2), sendemail was also working fine again. Great!

NB: after that I was also able to fix all the other issues on our production instance mentioned earlier in this post, like kvstore, Secure Gateway, etc. Great!

Many thanks to the support and development teams. I am now happy splunking again!

I hereby close this post.
Hi All, here is how my event looks:

20/11/2023 12:47:05 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessPaymentFailure::Additional response -> Message : NotAllowed ; Refusal Reason : message=MessageHeader.POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450

I am trying to extract the part "POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450". I am using this regex:

| rex field=_raw "MessageHeader.+(?<POIID_Error>)-*"

But the field value POIID_Error seems to be blank after running the query. Attaching the screenshot for reference. Any suggestion to fix this is appreciated.
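For anyone hitting the same symptom: a named group with nothing inside it, `(?<POIID_Error>)`, matches zero characters, so it always captures an empty string. A minimal Python sketch of the same regex semantics (note Python spells named groups `(?P<name>...)` while Splunk's rex uses PCRE-style `(?<name>...)`; the sample string is taken from the event above, and the "fixed" pattern is just one illustrative way to capture the wanted part):

```python
import re

event = ("Message : NotAllowed ; Refusal Reason : message=MessageHeader.POIID: "
         "NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450")

# Empty named group: the overall regex matches, but the group captures "".
empty = re.search(r"MessageHeader.+(?P<POIID_Error>)", event)
print(repr(empty.group("POIID_Error")))  # -> ''

# Put the content to extract INSIDE the group: everything after "MessageHeader."
fixed = re.search(r"MessageHeader\.(?P<POIID_Error>.*)", event)
print(fixed.group("POIID_Error"))
# -> POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450
```

The same idea in rex form would be `| rex field=_raw "MessageHeader\.(?<POIID_Error>.*)"`.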
Thank you @PickleRick for your inputs. I was able to build my solution using them, as below:

index=custom_index earliest=-4w@w latest=@d
| search [
    | inputlookup append=true table1.csv
    | where relative_time(now(),"-1d@d")
    | dedup fieldA
    | where fieldB<fieldC
    | fields + fieldA
    | fields - _time ]
| bin span=1d _time
| stats sum(xxx) AS xxx BY fieldA _time
| eventstats median(xxx) AS median_xxx BY fieldA
Hi @gcusello, I have added the code below but the image is not loading. I have given a dummy link below, but my actual private link is working fine:

<html>
<centre>
<img style="padding-top:60px" height="92" href="https://sharepoint.com/:i:/r/sites/Shared%20Documents/Pictures/Untitled%20picture.png?csf=1&amp;web=..." width="272" alt="Terraform "></img>
</centre>
</html>
Hi. We have an indexer cluster of 4 nodes with a little over 100 indexes. We've recently taken a look at the cluster manager fixup tasks and noticed a large number of fixup tasks (around 24,000) pending over 100 days for a select few of the indexes. The majority of these tasks are for the following reasons: "Received shutdown notification from peer" and "Cannot replicate as bucket hasn't rolled yet". For some reason these few indexes are quite low volume but have a large number of buckets.

Ideally I would like to clear these tasks. If we aren't precious about the data, would a suitable solution be to remove the indexes from the cluster configuration, manually delete the data folders for the indexes, and re-enable the indexes? Or could we reduce the data size / number of buckets on the index to clear out these tasks?

Example of one of the index configurations:

# staging: 0.01 GB/day, 91 days hot, 304 days cold
[staging]
homePath = /splunkhot/staging/db
coldPath = /splunkcold/staging/colddb
thawedPath = /splunkcold/staging/thaweddb
maxDataSize = 200
frozenTimePeriodInSecs = 34128000
maxHotBuckets = 1
maxWarmDBCount = 300
homePath.maxDataSizeMB = 400
coldPath.maxDataSizeMB = 1000
maxTotalDataSizeMB = 1400

Thanks for any advice.
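For what it's worth, the numbers in that stanza are internally consistent with its comment; a quick sanity check of the arithmetic (pure arithmetic on the values above, no assumptions about Splunk behaviour):

```python
# Comment says "91 days hot, 304 days cold"; frozenTimePeriodInSecs should
# match the total retention expressed in seconds.
hot_days, cold_days = 91, 304
frozen_secs = 34128000
retention_days = frozen_secs // 86400  # 86400 seconds per day
print(retention_days)                  # -> 395, i.e. 91 + 304

# maxTotalDataSizeMB should cover the home-path and cold-path caps combined.
home_mb, cold_mb = 400, 1000
print(home_mb + cold_mb)               # -> 1400, matching maxTotalDataSizeMB
```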
Perhaps it would be better for you to show what it is that you do want?
Hello. After upgrading from version 7 to version 8.2.12, we noticed that ui-prefs.conf no longer works. Inside /etc/user/app/local/ui-prefs.conf we have every user customization, but now they are totally skipped. Even the admin can't change his default view type (e.g. fast/smart/verbose). Is there a reason for this? And is there a way to restore this feature? Thanks.
Your second approach is what we are trying to do now, and it has worked very well for the most part, but we've run into some issues with file precedence when using the [default] stanza. I guess we'll keep doing this since, like you, I think it is more manageable to have the small pieces of configuration in their own apps. BTW, we are naming these apps starting with numbers, following the example of the 100_app that contains the TLS credentials to forward traffic to the cloud. The issue with Cloud vs. Enterprise is that to deploy to the cloud you need to pass the inspection process, and it fails if you have an inputs.conf, which for the forwarders you always want. So that's another good reason to have them in separate pieces.
Hi @gmbdrj, it's really difficult to answer your question in a few words. Anyway, by installing the MITRE ATT&CK App, you can start from a mapping of your searches to this framework. Then you can use Enterprise Security (if you have it) and/or the Splunk Security Essentials app to be guided in Use Case implementation. Anyway, remember that the starting point is always data: you have to analyze the data you have to understand which Use Cases you can enable. Ciao. Giuseppe
It is more likely that your performance issue is caused by the sort+streamstats rather than the lookup. Here is an example that does not use sort or streamstats - it may or may not work on your data, but the principle is to use stats. You can run this example and it will give you your results. The piece you would want is shown by the comment before the fields statement.

| makeresults format=csv data="_time,DEV_ID,case_name,case_action
01:00,111,ping111.py,start
01:20,111,ping111.py,end
02:00,222,ping222.py,start
02:30,222,ping222.py,end
02:40,111,ping222.py,start
03:00,111,ping222.py,end"
| eval _time=strptime("2023-11-21 "._time.":00", "%F %T")
| append [
  | makeresults format=csv data="_time,LOG_ID,Message_Name
01:10,01,event_a
02:50,02,event_a"
  | eval _time=strptime("2023-11-21 "._time.":00", "%F %T")
  | eval DEV_ID=111 ]
``` So use your first two lines of your search and then the following ```
| fields _time DEV_ID case_name case_action LOG_ID Message_Name
| eval t=if(isnull(LOG_ID),printf("%d##%s##%s", _time, case_action, case_name), null())
| eval lt=if(isnull(LOG_ID),null(),printf("%d##%s##%s", _time, LOG_ID, Message_Name))
| fields - LOG_ID Message_Name case_*
| stats values(*) as * by DEV_ID
| where isnotnull(lt)
| mvexpand lt
| eval s=split(lt, "##")
| eval _time=mvindex(s, 0), LOG_ID=mvindex(s, 1), Message_Name=mvindex(s, 2)
| rex field=t max_match=0 "(?<report_time>\d+)##(?<case_action>[^#]*)##(?<case_name>.*)"
| eval min_ix=-1
| eval c=0
| foreach mode=multivalue report_time [ eval min_ix=if(_time > '<<ITEM>>', c, min_ix), c=c+1 ]
| eval case_name=if(min_ix>=0, mvindex(case_name, min_ix), "unknown")
| eval case_action=if(min_ix>=0, mvindex(case_action, min_ix), "unknown")
| fields - s lt t c min_ix report_time
| table _time Message_Name LOG_ID DEV_ID case_name
Hi @MayurMangoli, did you configure your Indexers to receive encrypted logs? It seems that you forgot to add the correct configuration in the outputs.conf that you deployed to your UFs. For more info, see https://docs.splunk.com/Documentation/Splunk/8.2.12/Security/Aboutsecuringdatafromforwarders. Ciao. Giuseppe
Hi @Sirius_27, you can associate another app with your account at startup instead of Launcher, or you can define a home page dashboard to display after login. To set another app as the default, go to your Account and choose Preferences. To set a dashboard as the default home page, go to your dashboard and, after clicking "Edit", set the option "Set up Home Dashboard". Ciao. Giuseppe
Can anyone help with my request?
Hi @Splunkerninja, in Splunk Cloud you don't upload images; you use external online images. You have to follow the above procedure to avoid having to approve access to the external content each time. Ciao. Giuseppe
If Splunk already extracted two Account Names, wouldn't it be simpler to give the first value and the second value different names?

index="wineventlog" EventCode=4726
| eval SubjectAccountName = mvindex('Account Name', 0)
| eval TargetAccountName = mvindex('Account Name', 1)

Also, I remember that some say Windows events can come in as JSON. If you have structured data, you don't need to worry about any of this.
Just a note: often it is better to describe your use case than to try to "fix" SPL. Are you sure it is the lookup that slows the search, not the sort? Sorting a large amount of data is expensive in many ways, while lookup is a very efficient command. If you must avoid running the lookup on report_b, you can append it after the lookup.

(index=A Message_Name="event_a")
| lookup table_A.csv LOG_ID OUTPUT DEV_ID
| append [search index=A report="report_b"]
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start",last_case_name,case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name

Not sure how much this can speed the search up, however.
Instead of getting the search box after Splunk login, I want to make it look like some kind of welcome page, like the one in the splunk_secure_gateway app. I got the part about making a navigation view, but I am not able to change the launch page to a website-style welcome page. Please advise what changes I can make to a config file or XML file so I can get whatever page look I want.
@inventsekar, I believe this is the cause of the issue, from the snapshot below:

Creator_Process_Name: C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe
New_Process_Name: C:\Windows\System32\cmd.exe

When I excluded the creator process name Tanium, its new process name cmd.exe was also excluded.
I gave a dummy URL here, but I do have one private URL where it is working fine.
Dear All, I have one index, and I use this index to store messages and a summary report as well. report="report_b" stores the running case name and the device id (DEV_ID) used at timestamp _time, e.g.:

_time  DEV_ID  case_name   case_action
01:00  111     ping111.py  start
01:20  111     ping111.py  end
02:00  222     ping222.py  start
02:30  222     ping222.py  end
02:40  111     ping222.py  start
03:00  111     ping222.py  end

Message_Name="event_a" is stored in index=A as below:

_time  LOG_ID  Message_Name
01:10  01      event_a
02:50  02      event_a

I would like to associate each event_a with the case that is running when it is sent. So I use the code below:

Firstly, find the device id (DEV_ID) associated with this log (LOG_ID).
Secondly, associate event_a and case_name by DEV_ID.
Finally, list only those event_a.

(index=A Message_Name="event_a") OR (index=A report="report_b")
| lookup table_A.csv LOG_ID OUTPUT DEV_ID
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start",last_case_name,case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name

The output would be:

_time  Message_Name  LOG_ID  DEV_ID  case_name
01:10  event_a       01      111     ping111.py
02:50  event_a       02      111     ping222.py

The code works fine, but the amount of data is huge, so the lookup command takes a very long time. Furthermore, there is actually no need to apply the lookup command to report="report_b":

(index=A Message_Name="event_a"): 150000 records in 24 hours
(index=A report="report_b"): 700000 records in 24 hours

Is there any way to rewrite the code so that the lookup only applies to events belonging to (index=A Message_Name="event_a")? I tried using subsearch, append, and appendpipe to find the associated DEV_ID first, but it is not working.

Thank you so much.
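For anyone who wants to check the intended association logic outside Splunk: each event_a should pick up the case_name from the most recent report_b row for the same DEV_ID at or before the event time, and only if that row's action is "start" (i.e. a case is actually running). A small Python sketch using the sample rows above (illustration of the logic only, not a replacement for the SPL; "HH:MM" strings compare correctly lexicographically):

```python
from bisect import bisect_right

# (time, DEV_ID, case_name, case_action) rows from report_b
report_b = [
    ("01:00", 111, "ping111.py", "start"),
    ("01:20", 111, "ping111.py", "end"),
    ("02:00", 222, "ping222.py", "start"),
    ("02:30", 222, "ping222.py", "end"),
    ("02:40", 111, "ping222.py", "start"),
    ("03:00", 111, "ping222.py", "end"),
]
# (time, LOG_ID, Message_Name, DEV_ID) events, after the lookup resolves DEV_ID
events = [("01:10", "01", "event_a", 111), ("02:50", "02", "event_a", 111)]

# Group report_b rows per device, kept in time order.
by_dev = {}
for t, dev, name, action in report_b:
    by_dev.setdefault(dev, []).append((t, name, action))

def case_at(dev, t):
    """case_name of the case running on dev at time t, or None."""
    rows = sorted(by_dev.get(dev, []))
    i = bisect_right([r[0] for r in rows], t) - 1  # last row at or before t
    if i < 0:
        return None
    _, name, action = rows[i]
    return name if action == "start" else None     # "end" means nothing running

for t, log_id, msg, dev in events:
    print(t, msg, log_id, dev, case_at(dev, t))
# -> 01:10 event_a 01 111 ping111.py
# -> 02:50 event_a 02 111 ping222.py
```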