All Posts

They seem to correspond to different Carbon Black products:

https://splunkbase.splunk.com/app/5775 - Carbon Black App Control (formerly Bit9)
https://splunkbase.splunk.com/app/5774 - Carbon Black Defense
https://splunkbase.splunk.com/app/5947 - Carbon Black Response
https://splunkbase.splunk.com/app/6732 - VMware Carbon Black Cloud

Which Carbon Black product are you using? If you have a contact for your Carbon Black license, perhaps you can ask them which is the most appropriate SOAR connector for your Carbon Black products. Or you could try your API keys on each connector and see which one succeeds in its actions.
Thanks. I will create a support case for this. Do you have the old case ID at hand?
Hello, I have a WSUS server that is using the Windows Internal Database (WID). I would like to ingest WSUS service logs into Splunk, store them, and then parse them for further analysis. Could someone guide me on the best approach to achieve this? Specifically:

- What is the best way to configure Splunk to collect logs from the WSUS service (and database, if necessary)?
- Are there any best practices or recommended add-ons for parsing and indexing WSUS logs in Splunk?

Thanks in advance for your help!
Hi @whipstash,
add to the stats command, using the values option, all the fields you need from both searches:

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| search <your filter on the infoIWant fields here>
| append
    [ search index=INDEX sourcetype=sourcetypeB
      | stats count AS eventcount earliest(_time) AS earliest latest(_time) AS latest BY sessionID
      | eval duration=latest-earliest
      | where eventcount=2
      | fields sessionID duration field3 field4 ]
| stats values(eventID) AS eventID values(duration) AS duration values(field1) AS field1 values(field2) AS field2 values(field3) AS field3 values(field4) AS field4 values(count) AS count BY sessionID

Ciao.
Giuseppe
Hi @BB2,
only one question: why?
If the issue is the 50,000 character limit, you can simply increase the TRUNCATE limit. There is no benefit in truncating an event on the Forwarders and then reassembling it on the Indexers (and it is not actually possible anyway), because events are compressed, packaged, and sent from Forwarders to Indexers with no relation to the length of the event.
So I ask you again: why? The only action you need to take is raising the allowed event length via the TRUNCATE parameter.
Ciao.
Giuseppe
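For reference, a minimal sketch of that change, assuming a hypothetical sourcetype named my_sourcetype; TRUNCATE belongs in props.conf on the parsing tier (Indexers or Heavy Forwarders):

# props.conf -- raise the per-event truncation limit (0 disables truncation entirely)
[my_sourcetype]
TRUNCATE = 100000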
Hi @alex12,
As documented here, https://docs.splunk.com/Documentation/Forwarder/9.3.1/Forwarder/Installleastprivileged, the CAP_DAC_READ_SEARCH capability will work only with the UF (not with the HF), since the HF uses the regular Splunk Enterprise installation method: https://docs.splunk.com/Documentation/Splunk/9.3.1/Installation/InstallonLinux
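If it helps, a hypothetical sketch of how that capability can be granted to a non-root UF through a systemd drop-in (the unit name SplunkForwarder.service and the drop-in path are assumptions here; the documented least-privileged install procedure is the supported route):

# /etc/systemd/system/SplunkForwarder.service.d/override.conf
[Service]
# Lets the forwarder user read monitored files regardless of their permission bits
AmbientCapabilities=CAP_DAC_READ_SEARCH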
Hi @DATT,
please check this one; since latest= cannot reference a computed field directly, a subsearch can compute and return the latest boundary:

index=someIndex earliest=$token_epoch$ [| makeresults | eval latest=$token_epoch$ + 604800 | return latest]
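Alternatively, a minimal Simple XML sketch (token names assumed) that computes the week-end token once, whenever the dropdown selection changes:

<input type="dropdown" token="token_epoch">
  <label>Week</label>
  <fieldForLabel>DATE_RANGE_FRIENDLY</fieldForLabel>
  <fieldForValue>EPOCH_RANGE</fieldForValue>
  <change>
    <!-- $value$ is the selected EPOCH_RANGE; add 7 days (604800s) for the upper bound -->
    <eval token="token_epoch_end">$value$ + 604800</eval>
  </change>
</input>

The search can then use earliest=$token_epoch$ latest=$token_epoch_end$ directly.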
helper.get_arg("interval") is not working for me.
I used helper.get_input_stanza() to retrieve the stanza information as a dict; in that dict you will find the interval value.
Thanks, Awni
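For anyone hitting the same thing, a minimal sketch of that workaround, assuming an Add-on Builder modular input where helper.get_input_stanza() called with no argument returns all input stanzas as a dict keyed by stanza name (names below are hypothetical):

def collect_events(helper, ew):
    # Assumed shape: {stanza_name: {param_name: value, ...}, ...}
    stanzas = helper.get_input_stanza()
    for name, params in stanzas.items():
        interval = params.get("interval")
        helper.log_info("stanza=%s interval=%s" % (name, interval))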
I have a working dashboard that displays a number of metrics and KPIs for the previous week. Today, I was asked to expand that dashboard to include a dropdown of all previous weeks over the last year.

Using this query I was able to fill in my dashboard dropdown pretty easily:

| makeresults
| eval START_EPOCH = relative_time(_time,"-1y@w1")
| eval END_EPOCH = START_EPOCH + (60 * 60 * 24 * 358)
| eval EPOCH_RANGE = mvrange(START_EPOCH, END_EPOCH, 86400 * 7)
| mvexpand EPOCH_RANGE
| eval END_EPOCH = EPOCH_RANGE + (86400 * 7)
| eval START_DATE_FRIENDLY = strftime(EPOCH_RANGE, "%m/%d/%Y")
| eval END_DATE_FRIENDLY = strftime(END_EPOCH, "%m/%d/%Y")
| eval DATE_RANGE_FRIENDLY = START_DATE_FRIENDLY + " - " + END_DATE_FRIENDLY
| table DATE_RANGE_FRIENDLY, EPOCH_RANGE
| reverse

Using this I get a dropdown with values such as:

10/07/2024 - 10/14/2024
09/30/2024 - 10/07/2024

And so on, going back a year. Adding it to my search as a token has been more challenging, though. Here's what I'm trying to do:

index=someIndex earliest=$token_epoch$ latest=$token_epoch$+604800

Doing this I get "Invalid latest_time: latest_time must be after earliest_time."

I've seen some answers around here that involve running the search and then using WHERE to apply earliest and latest. I'd like to avoid that because the number of records that would have to be pulled before I could filter on earliest and latest is in the many millions. I've also considered using the timepicker, but my concern there is that the users of this dashboard will pick the wrong dates. I'd like to prevent that by hardcoding the first and last days of the search via the dropdown. Is there a way to accomplish relative earliest and latest dates/times like this?
Hi @darkins,
It is actually simple, as long as you are comfortable with regex syntax. You can tag each event first and then chain one rex per format; since rex only extracts fields when its pattern matches, events that don't match a given pattern pass through unchanged:

| eval condition=case(match(_raw, "thisword"), "first_condition", match(_raw, "thisotherword"), "second_condition", 1=1, "default_condition")
| rex field=_raw "<rex_pattern_for_first_condition>"
| rex field=_raw "<rex_pattern_for_second_condition>"
| rex field=_raw "<rex_pattern_for_default_condition>"

Give it a try and let me know how it goes.
Like in the subject, I am looking at events with different fields and delimiters. I want to say: if the event contains thisword, then rex blah blah blah; else if the event contains thisotherword, then rex blah blah blah. I suspect this is simple but thought to ask.
I think it's a permission issue; the Google Workspace user should have the "Organization Administrator" role. That's the only requirement for the account. Your account might be read-only?
Hello @isoutamo, parsing failing because of the missing double quotes? That looks like a bug to me. We had a similar bug some time back on Splunk version 6.
Hello @yuanliu. I noticed an issue when I cloned my classic dashboards to Studio. Is this happening with brand-new dashboards created in Studio? I have verified that my 9.2.2 (Cloud) instance has the same issue when cloning from Classic to Studio; however, there are no issues with dashboards originally created in Studio.
I have many Dashboard Studio dashboards by now. Those created earlier function normally, but starting a few days ago, newly created dashboards cannot use the "Open in Search" function any more; the magnifying glass icon is greyed out. "Show Open In Search Button" is checked. Any insight? My server is on 9.2.2 and there has been no change on the server side.
I believe this feature is for the UF only; the code changes were made only for the UF.
I have a similar query, but when using the case statement below, only <=500 and >=1500 are getting counted in stats. Durations of 6000 are also getting counted in the >=1500ms case, not sure why.

index=*
| rex "TxDurationInMillis=(?<TxDurationInMillis>\d+)"
| eval ResponseTime = tonumber(TxDurationInMillis)
| eval ResponseTimeCase=case(
    ResponseTime <= 500, "<=500ms",
    ResponseTime > 500 AND ResponseTime <= 1000, ">500ms and <=1000ms",
    ResponseTime > 1000 AND ResponseTime <= 1400, ">1000ms and <=1400ms",
    ResponseTime > 1400 AND ResponseTime < 1500, ">1400ms and <1500ms",
    ResponseTime >= 1500, ">=1500ms"
)
| table TxDurationInMillis ResponseTime ResponseTimeCase
You can apply EVENT_BREAKER settings in your props.conf:

1. Go to your app/local directory on your Deployment Server.
2. Create or edit the props.conf file.
3. Update EVENT_BREAKER with the appropriate regex pattern for your source. Typically, this is the same as your LINE_BREAKER regex (see the sketch below).
4. Reload the serverclass app on the Deployment Server.
5. Verify that the updated props.conf is successfully deployed to the Universal Forwarder.

That should complete the setup.

Refer: https://community.splunk.com/t5/Getting-Data-In/How-to-apply-EVENT-BREAKER-on-UF-for-better-data-distribution/m-p/614423

Hope this helps.
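A minimal sketch of the stanza from step 3, assuming a hypothetical sourcetype my_sourcetype whose events each begin with an ISO-style timestamp:

# props.conf (deployed to the Universal Forwarder)
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})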
Hi @Gregory.Burkhead, Have you had a chance to check out the reply from @Mario.Morelli? Were you able to find a solution that you could share, or do you still need help?
Hi @Husnain.Ashfaq, Were you able to check out and try what was suggested by @Mario.Morelli?