All Posts


Hi @BB2, only one question: why? If the issue is the 50,000-character limit, you only need to increase the TRUNCATE limit. There is no point (even if it were possible, which it is not) in truncating an event on the forwarders and then reassembling it on the indexers, because events are compressed, packaged into blocks, and sent from forwarders to indexers with no relation to the length of the individual events. So I ask you again: why? The only action you need to take is to increase the allowed event length via the TRUNCATE parameter. Ciao. Giuseppe
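As a sketch, raising that limit is a one-line props.conf change on the indexers (or on heavy forwarders that do the parsing); the sourcetype name and the value here are placeholders, not from the original thread:

```ini
# props.conf on the indexers (or on HFs that parse the data)
# [my_sourcetype] is a placeholder -- use your actual sourcetype
[my_sourcetype]
# Default is 10000 characters; 0 disables truncation entirely (use with care)
TRUNCATE = 100000
```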
Hi @alex12, as documented at https://docs.splunk.com/Documentation/Forwarder/9.3.1/Forwarder/Installleastprivileged, the CAP_DAC_READ_SEARCH capability works only with the UF (not with the HF), because the HF uses the regular Splunk Enterprise installation method: https://docs.splunk.com/Documentation/Splunk/9.3.1/Installation/InstallonLinux
Hi @DATT, please check this one, which computes the latest time in a subsearch and returns it to the outer search:

index=someIndex earliest=$token_epoch$ [| makeresults | eval latest=$token_epoch$ + 604800 | return latest]
helper.get_arg("interval") is not working for me. I used helper.get_input_stanza() to retrieve the stanza information as a dict; in that dict you will find the interval value. Thanks, Awni
I have a working dashboard that displays a number of metrics and KPIs for the previous week. Today, I was asked to expand that dashboard to include a dropdown of all previous weeks over the last year.

Using this query I was able to fill in my dashboard dropdown pretty easily:

| makeresults
| eval START_EPOCH = relative_time(_time,"-1y@w1")
| eval END_EPOCH = START_EPOCH + (60 * 60 * 24 * 358)
| eval EPOCH_RANGE = mvrange(START_EPOCH, END_EPOCH, 86400 * 7)
| mvexpand EPOCH_RANGE
| eval END_EPOCH = EPOCH_RANGE + (86400 * 7)
| eval START_DATE_FRIENDLY = strftime(EPOCH_RANGE, "%m/%d/%Y")
| eval END_DATE_FRIENDLY = strftime(END_EPOCH, "%m/%d/%Y")
| eval DATE_RANGE_FRIENDLY = START_DATE_FRIENDLY + " - " + END_DATE_FRIENDLY
| table DATE_RANGE_FRIENDLY, EPOCH_RANGE
| reverse

Using this I get a dropdown with values such as:

10/07/2024 - 10/14/2024
09/30/2024 - 10/07/2024

and so on, going back a year. Adding it to my search as a token has been more challenging, though. Here's what I'm trying to do:

index=someIndex earliest=$token_epoch$ latest=$token_epoch$+604800

Doing this I get "Invalid latest_time: latest_time must be after earliest_time." I've seen some answers around here that involve running the search and then using WHERE to apply earliest and latest. I'd like to avoid that because the number of records that would have to be pulled before I could filter on earliest and latest is in the many millions. I've also considered using the time picker, but my concern there is that the users of this dashboard will pick the wrong dates. I'd like to prevent that by hardcoding the first and last days of the search via the dropdown. Is there a way to accomplish relative earliest and latest dates/times like this?
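As a rough illustration (not SPL), the week-range generation above boils down to stepping through epoch time in 7-day increments, exactly what mvrange does; a minimal Python sketch of the same arithmetic, using a hypothetical start epoch:

```python
import time

WEEK = 86400 * 7  # seconds in one week

def week_ranges(start_epoch, weeks):
    """Mimic mvrange(start, end, 86400*7): return (label, start_epoch) pairs."""
    ranges = []
    for start in range(start_epoch, start_epoch + weeks * WEEK, WEEK):
        end = start + WEEK
        fmt = "%m/%d/%Y"
        label = "{} - {}".format(
            time.strftime(fmt, time.gmtime(start)),
            time.strftime(fmt, time.gmtime(end)),
        )
        ranges.append((label, start))
    return ranges

# Example: two weeks starting Monday 2024-09-30 00:00 UTC (epoch 1727654400)
for label, start in week_ranges(1727654400, 2):
    print(label, start)
```

The epoch value attached to each label is what would flow into the dropdown token, so the dashboard search can derive latest as that epoch plus 604800.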
Hi @darkins, it is actually simple, as long as you are comfortable with regex syntax. Note that rex has no conditional clause, but it silently leaves fields empty on events its pattern does not match, so you can tag each event with case() and then run one rex per pattern:

| eval condition=case(match(_raw, "thisword"), "first_condition", match(_raw, "thisotherword"), "second_condition", true(), "default_condition")
| rex field=_raw "<rex_pattern_for_first_condition>"
| rex field=_raw "<rex_pattern_for_second_condition>"
| rex field=_raw "<rex_pattern_for_default_condition>"

Give it a try and let me know how it goes.
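Outside SPL, the same keyword-dispatch pattern can be sketched in Python; the keywords and regex patterns below are placeholders standing in for the "<rex_pattern>" slots above:

```python
import re

# Placeholder extraction rules keyed by the keyword that selects them
RULES = [
    ("thisword", r"thisword=(?P<value>\w+)"),
    ("thisotherword", r"thisotherword:(?P<value>\w+)"),
]
DEFAULT = r"(?P<value>\w+)$"  # fallback pattern for everything else

def extract(raw):
    """Pick a regex based on which keyword the event contains."""
    for keyword, pattern in RULES:
        if keyword in raw:
            m = re.search(pattern, raw)
            return m.group("value") if m else None
    m = re.search(DEFAULT, raw)
    return m.group("value") if m else None

print(extract("host=a thisword=foo"))      # -> foo
print(extract("thisotherword:bar extra"))  # -> bar
```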
Like in the subject, I am looking at events with different fields and delimiters. I want to say: if the event contains thisword, then rex blah blah blah; else if the event contains thisotherword, then rex blah blah blah. I suspect this is simple, but thought I'd ask.
I think it's a permission issue. The Google Workspace user should have the "Organization Administrator" role; that's the only requirement for the account. Your account might be read-only?
Hello @isoutamo, parsing failing because of missing double quotes? Looks like a bug to me. We had a similar bug some time back on Splunk version 6.
Hello @yuanliu. I noticed this issue when I cloned my classic dashboards to Studio. Is this happening with brand-new dashboards created in Studio? On my 9.2.2 (Cloud) instance I have verified the same issue when cloning from Classic to Studio; however, there are no issues with dashboards created natively in Studio.
I have many Dashboard Studio dashboards by now. Those created earlier function normally, but starting a few days ago, newly created dashboards cannot use the "Open in Search" function any more; the magnifying glass icon is greyed out, even though "Show Open In Search Button" is checked. Any insight? My server is on 9.2.2 and there has been no change on the server side.
I believe this feature is for the UF only; the code changes were made only for the UF.
I have a similar query, but when using the case() below, only the <=500 and >=1500 buckets are getting counted in stats. Durations of 6000 are also being counted in the >=1500ms case, and I'm not sure why:

index=*
| rex "TxDurationInMillis=(?<TxDurationInMillis>\d+)"
| eval ResponseTime = tonumber(TxDurationInMillis)
| eval ResponseTimeCase=case(
    ResponseTime <= 500, "<=500ms",
    ResponseTime > 500 AND ResponseTime <= 1000, ">500ms and <=1000ms",
    ResponseTime > 1000 AND ResponseTime <= 1400, ">1000ms and <=1400ms",
    ResponseTime > 1400 AND ResponseTime < 1500, ">1400ms and <1500ms",
    ResponseTime >= 1500, ">=1500ms")
| table TxDurationInMillis ResponseTime ResponseTimeCase
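For what it's worth, case() evaluates its conditions top-down and takes the first match, and 6000 does satisfy ResponseTime >= 1500, so that bucket is behaving exactly as its label says. A small Python sketch of the same first-match logic (the thresholds mirror the SPL above):

```python
def bucket(response_time):
    """First-match bucketing, mirroring SPL case() evaluation order."""
    if response_time <= 500:
        return "<=500ms"
    if 500 < response_time <= 1000:
        return ">500ms and <=1000ms"
    if 1000 < response_time <= 1400:
        return ">1000ms and <=1400ms"
    if 1400 < response_time < 1500:
        return ">1400ms and <1500ms"
    if response_time >= 1500:
        return ">=1500ms"
    return None  # SPL case() with no default returns null here

for rt in (450, 750, 1200, 1450, 1500, 6000):
    print(rt, bucket(rt))  # 6000 lands in ">=1500ms", as the label states
```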
You can apply EVENT_BREAKER settings in your props.conf:

1. Go to your app's local directory on your Deployment Server.
2. Create or edit the props.conf file.
3. Set EVENT_BREAKER to the appropriate regex pattern for your source. Typically, this is the same as your LINE_BREAKER regex.
4. Reload the serverclass/app on the Deployment Server.
5. Verify that the updated props.conf is successfully deployed to the Universal Forwarder.

That should complete the setup.

Refer: https://community.splunk.com/t5/Getting-Data-In/How-to-apply-EVENT-BREAKER-on-UF-for-better-data-distribution/m-p/614423

Hope this helps.
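As a sketch, the deployed stanza might look like the following; the sourcetype name and the regex are placeholders for your data, and note that EVENT_BREAKER only takes effect when EVENT_BREAKER_ENABLE is also set:

```ini
# props.conf in the deployed app's local/ directory on the UF
# [my_sourcetype] and the regex below are placeholders
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
# Same pattern you would use for LINE_BREAKER: break before each timestamp
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```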
Hi @Gregory.Burkhead, have you had a chance to check out the reply from @Mario.Morelli? Were you able to find a solution that you could share, or do you still need help?
Hi @Husnain.Ashfaq, Were you able to check out and try what was suggested by @Mario.Morelli? 
Apart from very specific cases of systems with constant memory requirements, there is no way of telling how many resources you will need, especially without knowing your load, your data, and so on. Having said that, there are general sizing hints here: https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware

Additionally, IMHO indexers should not use swap. At all. If you have to reach to disk for "memory", you're slowing your I/O, which means you're building up your queues, which means you're using even more memory. That's a downhill path. (OK, you can have a very small swap to keep some sleeping system daemons out of active RAM, but that's usually not worth the bother.)
Hard to say without knowing your exact data and config. But Splunk does tend to guess the time format, and it's usually not the best idea to let it. So if you don't have timestamps in your data, it's best to explicitly configure your sourcetype so that Splunk doesn't guess but simply assigns the current timestamp (as @gcusello already showed).
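For reference, a minimal props.conf sketch of that approach (the sourcetype name is a placeholder):

```ini
# props.conf -- use current index time instead of timestamp guessing
# [my_timestampless_sourcetype] is a placeholder
[my_timestampless_sourcetype]
DATETIME_CONFIG = CURRENT
```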
There is an add-on for Palo Alto solutions. https://splunkbase.splunk.com/app/7523 It is Splunk-supported so it should have a pretty decent manual.
1. Are you properly providing password and oldpassword? 2. Just for the sake of clarity - you're trying to update a local user, right?