All Posts

The problem with moving to Studio for just one feature is that it doesn't (yet) support all the features that are available in Classic. You need to weigh the pros and cons of each and decide what costs you are willing to pay for your choice. Having said that, you may be able to do what you want in Classic by using the done handler for the inputs. Did you investigate that?
Hello, I am trying to add some logic/formatting to my list of failed authentications. Here's my search query:

| tstats summariesonly=true count from datamodel="Authentication" WHERE Authentication.action="failure" AND Authentication.user="*" AND Authentication.src="*" AND Authentication.user!=*$ by Authentication.user
| `drop_dm_object_name("Authentication")`
| sort - count
| head 10

I want to make it count how many consecutive days a user has been on this list. Is that possible?
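One possible approach is a sketch like the following — untested against your data, and it assumes the Authentication data model has daily coverage and that results come back in ascending time order. The idea is to break the tstats search out by day, then use streamstats to detect gaps between consecutive days and count the length of each streak:

```
| tstats summariesonly=true count from datamodel="Authentication"
    WHERE Authentication.action="failure" AND Authentication.user!=*$
    BY Authentication.user _time span=1d
| `drop_dm_object_name("Authentication")`
``` previous day this user appeared ```
| streamstats current=f last(_time) as prev_day by user
``` a gap longer than one day (86400s) starts a new streak ```
| eval new_streak=if(isnull(prev_day) OR _time-prev_day>86400, 1, 0)
| streamstats sum(new_streak) as streak_id by user
``` length of each run of consecutive days ```
| stats count as consecutive_days max(_time) as latest_day by user streak_id
```

The 86400-second gap test is a simplification (it ignores DST shifts); adjust the filtering and field names to match your environment.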
Hi, I agree. I moved to Studio because someone recommended it: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-improve-the-filter-input/m-p/668513#M54700 I have already converted the dashboard and created a much more impressive visualization, so I prefer to stay with the Studio version, but I have a problem because tabs are not supported. Do you know of a workaround? Thanks
@gcusello, hi. If I blacklist C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe as the creatorprocessname, would it also block events where the newprocessname is C:\Windows\System32\cmd.exe? Thanks
@ITWhisperer   it seems to be working now , thanks a lot
MessageHeader\.(?<POIID_Error>.+)
Last week v9.1.2 was released (6 Nov 2023, I think). After installing this version on my test instance (v9.1.1), everything seems to work again, including sendemail - no issues found. Great! After installing this version several days later on our production instance (9.1.0.2), sendemail was working fine again there too. Great! NB: after that I was also able to fix all the other issues on our production instance mentioned earlier in this post (kvstore, secure gateway, etc.). Many thanks to the support and development teams. I am now happy splunking again! I hereby close this post.
Hi all, here is how my event looks:

20/11/2023 12:47:05 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessPaymentFailure::Additional response -> Message : NotAllowed ; Refusal Reason : message=MessageHeader.POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450

I am trying to extract the part "POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450". I am using this regex:

| rex field=_raw "MessageHeader.+(?<POIID_Error>)-*"

But the field value POIID_Error is blank after running the query. Attaching a screenshot for reference. Any suggestion to fix this is appreciated.
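For context on why that regex returns nothing: the capture group (?<POIID_Error>) is empty, so it matches zero characters and the field is always blank. Applying the one-line answer posted in this thread, the corrected command would be:

```
| rex field=_raw "MessageHeader\.(?<POIID_Error>.+)"
```

Here the escaped \. matches the literal dot after "MessageHeader", and .+ captures everything after it — for the sample event above, that yields "POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450". Verify against your own events, as other event shapes may need a tighter pattern.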
Thank you @PickleRick for your inputs. I was able to build my solution using it as below:

index=custom_index earliest=-4w@w latest=@d
| search [ | inputlookup append=true table1.csv
    | where relative_time(now(),"-1d@d")
    | dedup fieldA
    | where fieldB<fieldC
    | fields + fieldA
    | fields - _time ]
| bin span=1d _time
| stats sum(xxx) AS xxx BY fieldA _time
| eventstats median(xxx) AS median_xxx BY fieldA
Hi @gcusello, I have added the code below but the image is not loading. I have given a dummy link below; my actual private link works fine on its own.

<html>
<centre>
<img style="padding-top:60px" height="92" href="https://sharepoint.com/:i:/r/sites/Shared%20Documents/Pictures/Untitled%20picture.png?csf=1&amp;web=..." width="272" alt="Terraform "></img>
</centre>
</html>
Hi. We have an indexer cluster of 4 nodes with a little over 100 indexes. We recently looked at the cluster manager fixup tasks and noticed a large number of them (24,000) pending for over 100 days, for a select few of the indexes. The majority of these tasks are for the following reasons: "Received shutdown notification from peer" and "Cannot replicate as bucket hasn't rolled yet". For some reason these few indexes are quite low volume but have a large number of buckets. Ideally I would like to clear these tasks. If we aren't precious about the data, would a suitable solution be to remove the indexes from the cluster configuration, manually delete the data folders for the indexes, and re-enable the indexes? Or could we reduce the data size / number of buckets on the index to clear out these tasks? Example of one of the index configurations:

# staging: 0.01 GB/day, 91 days hot, 304 days cold
[staging]
homePath = /splunkhot/staging/db
coldPath = /splunkcold/staging/colddb
thawedPath = /splunkcold/staging/thaweddb
maxDataSize = 200
frozenTimePeriodInSecs = 34128000
maxHotBuckets = 1
maxWarmDBCount = 300
homePath.maxDataSizeMB = 400
coldPath.maxDataSizeMB = 1000
maxTotalDataSizeMB = 1400

Thanks for any advice.
Perhaps it would be better for you to show what it is that you do want?
Hello. After upgrading from version 7 to version 8.2.12, we noticed that ui-prefs.conf is not working anymore. Inside /etc/user/app/local/ui-prefs.conf we have every user customization; now they are totally skipped. Even the admin can't change his default view type (e.g. fast/smart/verbose). Is there a reason for this, and is there a way to restore the feature? Thanks.
Your second approach is what we are trying to do now, and it has worked very well for the most part, but we've run into some issues with file precedence when using the [default] stanza. I guess we'll keep doing this, since I think, as you do, that it is more manageable to have the small pieces of configuration in their own apps. BTW, we are naming these apps starting with numbers, following the example of the 100_ app that contains the TLS credentials to forward traffic to the cloud. The issue with Cloud vs. Enterprise is that to deploy to the cloud you need to pass the inspection process, and it fails if you have an inputs.conf, which for the forwarders you always want. So that's another good reason to keep them in separate pieces.
Hi @gmbdrj, it's really difficult to answer your question in a few words. Anyway, by installing the MITRE ATT&CK app, you can start from a mapping of your searches to this framework. Then you can use Enterprise Security (if you have it) and/or the Splunk Security Essentials app to be guided through use-case implementation. Anyway, remember that the starting point is always data: you have to analyze the data you have to understand which use cases you can enable. Ciao. Giuseppe
It is more likely that your performance issue is caused by the sort+streamstats rather than the lookup. Here is an example that does not use sort or streamstats - it may or may not work with your data, but the principle is to use stats. You can run this example and it will give you your results. The piece you would want is shown by the comment before the fields statement. (Note: the second eval's bare null has been corrected to null(), which is the valid SPL form.)

| makeresults format=csv data="_time,DEV_ID,case_name,case_action
01:00,111,ping111.py,start
01:20,111,ping111.py,end
02:00,222,ping222.py,start
02:30,222,ping222.py,end
02:40,111,ping222.py,start
03:00,111,ping222.py,end"
| eval _time=strptime("2023-11-21 "._time.":00", "%F %T")
| append [
    | makeresults format=csv data="_time,LOG_ID,Message_Name
01:10,01,event_a
02:50,02,event_a"
    | eval _time=strptime("2023-11-21 "._time.":00", "%F %T")
    | eval DEV_ID=111 ]
``` So use your first two lines of your search and then the following ```
| fields _time DEV_ID case_name case_action LOG_ID Message_Name
| eval t=if(isnull(LOG_ID),printf("%d##%s##%s", _time, case_action, case_name), null())
| eval lt=if(isnull(LOG_ID),null(),printf("%d##%s##%s", _time, LOG_ID, Message_Name))
| fields - LOG_ID Message_Name case_*
| stats values(*) as * by DEV_ID
| where isnotnull(lt)
| mvexpand lt
| eval s=split(lt, "##")
| eval _time=mvindex(s, 0), LOG_ID=mvindex(s, 1), Message_Name=mvindex(s,2)
| rex field=t max_match=0 "(?<report_time>\d+)##(?<case_action>[^#]*)##(?<case_name>.*)"
| eval min_ix=-1
| eval c = 0
| foreach mode=multivalue report_time
    [ eval min_ix=if(_time > '<<ITEM>>', c, min_ix), c=c+1 ]
| eval case_name=if(min_ix>=0, mvindex(case_name, min_ix), "unknown")
| eval case_action=if(min_ix>=0, mvindex(case_action, min_ix), "unknown")
| fields - s lt t c min_ix report_time
| table _time Message_Name LOG_ID DEV_ID case_name
Hi @MayurMangoli, did you configure your indexers to receive encrypted logs? It seems that you forgot to add the correct configuration to the outputs.conf that you deployed to your UFs. For more information, see https://docs.splunk.com/Documentation/Splunk/8.2.12/Security/Aboutsecuringdatafromforwarders Ciao. Giuseppe
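As a minimal sketch of what that configuration typically looks like — hostnames, ports, and certificate paths below are placeholders, so adjust them to your environment and certificates:

```
# outputs.conf on the universal forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate password>
sslVerifyServerCert = true

# inputs.conf on the indexers
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate password>
```

Both sides must agree: a forwarder sending TLS to a plain [splunktcp] port (or vice versa) will fail to connect, which matches the symptom described. The linked docs page covers the full set of options.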
Hi @Sirius_27, you can associate another app with your account at startup instead of Launcher, or you can define a home page dashboard to display after login. To set another app as default, go to your account and choose Preferences. To set a dashboard as the default home page, open the dashboard and, after clicking "Edit", use the option "Set up Home Dashboard". Ciao. Giuseppe
Can anyone help with my request?
Hi @Splunkerninja, in Splunk Cloud you don't upload images; you use external online images. You have to follow the above procedure to avoid having to approve access to the external content each time. Ciao. Giuseppe