All Posts

This question has been asked and answered many times. For example: Re: How to color the columns based on previous co... - Splunk Community, which you could adapt for your use case. You could have found this with a simple search: Search - Splunk Community
Hi @shakti, if you are in violation, you receive a message from your Splunk instance, and you can enable an alert that sends you an email when this occurs or when it's near. You can find this search in your Monitoring Console alerts. Ciao. Giuseppe
Try NOT, as the capitalised version is a recognised operator (similarly for OR and AND).
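For illustration, a minimal sketch (the index, sourcetype, and field names here are made up):

```spl
index=web sourcetype=access_combined NOT status=200
```

With uppercase NOT, Splunk excludes events where status=200; a lowercase "not" would instead be treated as a literal search term to match in the raw event text.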
Hi, I am looking to grab all Windows events of successful NTLM logins without using Kerberos. Here is my query so far: "eventcode=4776" "Error Code: 0x0" ntlm. I think this is working as of now; however, it brings back results including the value Kerberos. I tried using NOT "Kerberos", but it completely broke my search results. I am looking to grab only the values of "Account Name:" and "Source Network Address:" and then export them to a CSV file every week. Is this something I can do with Splunk? If so, any help would be appreciated. Thanks.
You would utilize the stats command to find an average of the diff_seconds field using a by-field of search_name. Something like this (following the search I shared before):

index=notable
| eval event_epoch=if( NOT isnum(event_time), strptime(event_time, "%m/%d/%Y %H:%M:%S"), 'event_time' ), orig_epoch=if( NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time' )
| eval event_epoch_standardized=coalesce(event_epoch, orig_epoch), diff_seconds='_time'-'event_epoch_standardized'
| fields + _time, search_name, event_time, diff_seconds
| stats count as sample_size, min(diff_seconds) as min_diff_seconds, max(diff_seconds) as max_diff_seconds, avg(diff_seconds) as avg_diff_seconds by search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
Verify the inputs.conf and outputs.conf files are the same in all three regions. Make sure they all have the latest Splunk Cloud certificate. Confirm there are no firewalls blocking traffic between the third forwarder and Splunk Cloud. When you looked at splunkd.log, were you looking on the forwarder or in the indexed logs? It should be the former.
Hi @jovnice, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi, I have a stats table with the following
The props.conf settings are missing TIME_FORMAT.  Other settings may need to be changed, but we need to see the raw data (the CSV file before it gets to Splunk) to determine that.
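As a rough sketch of what the fix might look like, a props.conf stanza for a CSV source could resemble the following. The sourcetype name, timestamp field, and format string are assumptions; they must be adjusted to match the actual data:

```
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# Field in the CSV that carries the event timestamp (hypothetical name)
TIMESTAMP_FIELDS = timestamp
# Must match the actual timestamp layout in the file
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Without TIME_FORMAT, Splunk falls back to automatic timestamp recognition, which can mis-parse ambiguous formats such as day/month ordering.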
Thank you. It works.
AFAIK, indexer queues are not configurable. You can, however, use maxQueueSize in outputs.conf on the forwarders to set the size of the output queue. That's the queue where packets are stored if the destination becomes unavailable. In the case of HEC inputs, it's the responsibility of the client to retry any request that gets a response code other than 200 (OK) — "server is busy" in this case.
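A minimal outputs.conf sketch showing where maxQueueSize goes (the group name and indexer hostnames are placeholders):

```
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Output queue size; events accumulate here while the indexers are unreachable
maxQueueSize = 512MB
```

maxQueueSize accepts either an integer (number of in-memory events) or a size with a KB/MB/GB suffix.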
<your index> [| inputlookup <your lookup> | table ClientName] "Certificate was successfully validated"
Hello Splunk members! I have a CSV lookup file with two columns:

ClientName    HWDetSystem
BD-K-027EY    VMware

I have an index with ASA firewall logs which I want to search to find events for all the ClientName values in the CSV:

234654252.234 %ASA-3-2352552: Certificate was successfully validated. serial number: 1123423SSDDG23442234234DSGSGSGGSSG8, subject name: CN=BD-K-027EY.bl.emea.something.com.

The common element between the CSV lookup file and the event is the ClientName, which appears as a portion of the subject name. If I search for "successfully" and provide a single client name, I get the event I want, but I am struggling to look it up for all the clients and make the results unique. In the end I just want a list of ClientName values for which the event was logged. Thanks
You could construct your search so that each row has a field containing the names of the recipients. Then set up the alert so that it triggers for every result, and use the $row.field$ token as the recipient in the trigger action. Note that this means recipients will get multiple emails if their address appears in more than one row of the report.
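As a sketch of the search side of this, assuming a hypothetical index and an owner_email field that carries each row's recipients:

```spl
index=app_errors
| stats count, values(owner_email) as recipients by error_type
| eval recipients=mvjoin(recipients, ",")
```

Each result row then carries a comma-separated recipients field that the per-result token in the email action can reference.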
The closest thing you have in Studio at the moment are markdown blocks but I suspect this will not give you what you need.
According to Splunk documentation, you can use SAML with tokens: "Create authentication tokens to use the REST APIs. Tokens are available for both native Splunk authentication and external authentication through either the LDAP or SAML schemes. To learn more about setting up authentication with tokens, see Set up authentication with tokens in the Securing Splunk Enterprise manual." There are some SAML-side requirements, per the token documentation: "Single Sign-On (SSO) schemes that use SAML. These schemes must either support Attribute Query Requests (AQR) or provide information through scripted authentication extensions." Hope this helps!
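Once a token is issued, it is passed as a bearer token to the REST API. A sketch, with a placeholder hostname (8089 is the default management port):

```shell
# $SPLUNK_TOKEN holds the authentication token issued in Splunk Web
curl -k -H "Authorization: Bearer $SPLUNK_TOKEN" \
     https://splunk.example.com:8089/services/search/jobs \
     -d search="search index=_internal | head 5"
```

This works the same way regardless of whether the token was issued for a native, LDAP, or SAML user.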
Our dashboards were made in HTML with custom JS; this is now considered a vulnerability, so we have to rewrite all dashboards in Dashboard Studio. This is what we used to view the dashboard in an XML file:

<view template="app-name:/templates/file.html"> <label>Name of the app</label> </view>

Apparently the line element is always on top no matter the order in the source code. For example, in the attached photo, the PTS block with the red circle must be on top of the green, blue, and orange lines (the example is from the old dashboard). As far as I understand, HTML is not supported in Dashboard Studio, and I can't find another way to solve this problem.
A Splunk instance can connect to only one license manager. Perhaps Corporate can add your license to their LM and allocate that quota to your pool.
I also did a dbinspect on this index and searched for the bucketId. It gives the following:

bucketId aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319
endEpoch 1660559027
eventCount 0
guId B5D4AECD-273A-4CB5-88B4-F6C5C75C3564
hostCount 0
id 183
index aaaaaa
modTime 01/30/2024:15:24:57
path /opt/splunk/data/cold/aaaaaa/db_1660559027_1659954230_183_839799B0-6EAF-436C-B12A-2CDC010C1319
rawSize 0
sizeOnDiskMB 3.078125
sourceCount 0
sourceTypeCount 0
splunk_server server1.bez.nl
startEpoch 1659954230
state cold
tsidxState full

I don't understand the fact that it says it is in cold. This index (like all indexes on these servers) has been migrated to SmartStore, so this path is wrong. Am I missing something? And why is eventCount 0 and rawSize 0, yet there are a startEpoch and endEpoch without events?
Which lines and circles are you referring to? Studio is still not as flexible as Classic in many respects. Why are you migrating?