

All Posts

Looks like you have some elements which are only partially removed from the code. Elements are defined in one array and the layout is in another array, so you'll need to remove the erroneous references to continue. This error isn't about the refresh, but about the dashboard code not being valid.
1. Clone the dashboard, or make a backup
2. Note down the element IDs
3. Open code view
4. Use Find (CTRL-F, or your browser/OS shortcut) and enter each ID
5. Remove all instances of the ID, being careful not to delete other bits of code
Post again when you've cleared out the errored elements.
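For reference, here is a minimal sketch of how a Dashboard Studio definition ties elements and layout together (the IDs, type, and query are hypothetical and simplified). A panel ID left in the layout array after the visualization itself was deleted, or vice versa, is exactly the kind of dangling reference that makes the code invalid:

{
    "visualizations": {
        "viz_abc123": { "type": "splunk.singlevalue", "dataSources": { "primary": "ds_search_1" } }
    },
    "dataSources": {
        "ds_search_1": { "type": "ds.search", "options": { "query": "index=_internal | stats count" } }
    },
    "layout": {
        "type": "grid",
        "structure": [
            { "item": "viz_abc123", "position": { "x": 0, "y": 0, "w": 300, "h": 300 } }
        ]
    }
}

Searching the code view for each reported ID should turn up every place it is still referenced.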
Hi @dataisbeautiful  I'm not deleting any element in the code. I'm trying to update the refresh option for the whole dashboard. Currently it is 2m by default (from the clone or somewhere else), but I want to either remove it or set a higher refresh value. The error pops up for all panel IDs; I've attached a screenshot as a sample.
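For context, in Dashboard Studio the refresh interval normally lives on the search data sources rather than on the panels; a hedged sketch of what that looks like (option names as documented for ds.search, values hypothetical):

"dataSources": {
    "ds_search_1": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | stats count",
            "refresh": "10m",
            "refreshType": "delay"
        }
    }
}

Raising the value, or removing the refresh option entirely, on each data source (or in the dashboard defaults section) is the usual way to change or disable the auto-refresh.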
Hi @gcusello I tried both the spath and rex commands, and neither has worked. Using spath I didn't get any statistics. With rex, it showed 1317 results in the statistics tab, but they are completely empty, or you could say it created an empty table. Please give me a full command which you feel is going to work; the index and search string have already been provided.
It helped, but how can I ensure that it creates the severity = INFO field only when level=Information?
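One way to restrict that (a sketch only, not tested against your data) is to anchor the regex on the literal value rather than on \w+, so the transform simply does not match any other level value:

[severity]
REGEX = "level":\s*"(Information)"
FORMAT = severity::INFO
WRITE_META = true

Because the capture group can only ever contain Information, keeping a literal INFO in FORMAT as shown means the indexed field is only written for those events.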
Hi Will, Thank you for your comment. Upon checking with the search you sent, the last chance index has the highest count and main returned only one result. I just want to know why the events are being redirected to the last chance index even though I declared the index before creating the input. Is there an extra step I've missed configuring or enabling, which is why the data is going to the last chance index instead of the index I created?
Hi @livehybrid
Thanks for the response.
| tstats summariesonly=true values(All_Traffic.dest) as dest dc(All_Traffic.dest) as count from datamodel=Network_Traffic where All_Traffic.dest_port!="443" All_Traffic.dest_port!="80" All_Traffic.src_ip!="*:*" All_Traffic.src_ip!="5.195.243.8" ```cpx PT IP``` by All_Traffic.src_ip All_Traffic.dest_port
| rename All_Traffic.src_ip as src All_Traffic.dest_port as dest_port
| search NOT [| inputlookup internalip]
| where count>=20
| iplocation src
| eval severity="high"
This is how one of the use cases looks, and recent notables have the urgency shown below. When I check index=notable for this alert, the severity is showing as high.
Hi @RSS_STT
The issue here is the SOURCE_KEY, which is incorrectly set; it should be set to _raw, although _raw is the default so you could just remove that line entirely. You also do not need to specify the name of the extraction in the REGEX and can instead use $1, so your resulting transform will look like:
[severity]
REGEX = "level":\s\"(\w+)
FORMAT = severity::"$1"
WRITE_META = true
Please let me know how you get on and consider upvoting/karma this answer if it has helped.
Regards
Will
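One thing worth double-checking alongside the transform (not shown in the snippets above): it only runs if it is referenced from props.conf on the indexing tier, along the lines of this sketch (sourcetype name assumed):

[your_sourcetype]
TRANSFORMS-severity = severity

And since severity is an indexed field, declaring it in fields.conf on the search head lets searches such as severity=INFO use the indexed value:

[severity]
INDEXED = true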
I want to create a new index-time field, severity, if the raw JSON payload has a level field whose value is Information.
{ "level": "Information", "ORIGIN_Severity_name": "CRITICAL", "ProductArea": "Application", "ORIGIN_Product": "Infrastructure"}
What's wrong in my transforms.conf configuration? Any help is much appreciated.
transforms.conf
[severity]
REGEX = "level":\s\"(?<severity>\w+)
SOURCE_KEY = fields:level
FORMAT = severity::"INFO"
WRITE_META = true
Thank you for your response. How do I configure a limit of 80% on an index and prevent data from being written beyond it?
It might be worth checking in index=main in case your "lastChanceIndex" is set to main. The other thing you could do is perform a search across all indexes, *however* I would generally advise against index=* searches, so do this sparingly!
| tstats count where index=* sourcetype="sourcetype::xxx" by index
Please let me know how you get on and consider upvoting/karma this answer if it has helped.
Regards
Will
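If you have CLI access to the indexers, one quick way to confirm whether lastChanceIndex is actually set is btool, for example:

$SPLUNK_HOME/bin/splunk btool indexes list --debug | grep -i lastChanceIndex

The --debug flag shows which .conf file each setting comes from.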
Hi @arunkuriakose
Ensure that the severity is not being overwritten by any other SPL or macros within your search, as sometimes macros within ES can contain a lot of logic and may be overwriting your custom severity. It might help if you could share your search here.
When you look at the raw notables generated, is the severity showing as high or medium? (Not just in the UI/ES dashboards)
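A quick way to check the stored value rather than the dashboard view might be something like the following (the search_name value is a placeholder for your correlation search):

index=notable search_name="Your Correlation Search Name"
| table _time search_name severity
| sort - _time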
tl;dr: It should continue writing but will drop the oldest data if configured correctly.
In Splunk, the index size can indeed exceed the maximum limit if not properly configured, which can result in data deletion or suspended data writing. The critical settings to monitor are maxTotalDataSizeMB, frozenTimePeriodInSecs, homePath.maxDataSizeMB, and coldPath.maxDataSizeMB.
maxTotalDataSizeMB: This setting specifies the maximum overall size (in MB) allowed for an index. When this limit is reached, Splunk automatically rolls the oldest data to the "frozen" state, which by default means deletion unless the coldToFrozenScript or coldToFrozenDir settings are configured.
frozenTimePeriodInSecs: This defines how long data can remain in the index before being frozen. Once the time elapses, the data is typically deleted unless alternative archiving options are specified.
homePath.maxDataSizeMB: This setting controls the maximum size of the home path, encompassing hot and warm buckets. If this threshold is surpassed, older buckets move to cold storage.
coldPath.maxDataSizeMB: This defines the maximum size for cold storage. Exceeding this limit results in the freezing of older buckets.
If an index is reaching its limits frequently, consider evaluating your data volumes, reviewing these configurations, and ensuring that your Splunk setup can handle projected growth, to avoid unintended data loss or performance degradation.
Splunk will start rolling data to frozen when either frozenTimePeriodInSecs or maxTotalDataSizeMB is met, whichever comes first. This could mean that even if you expect 30 days of data, if there is only enough disk space for 10 days then it will start rolling to frozen (which may mean deletion) sooner than expected. If your disk space drops below the value set in server.conf/[diskUsage]/minFreeSpace (defaults to 5000 MB) then it will stop letting you execute searches.
Please let me know how you get on and consider upvoting/karma this answer if it has helped.
Regards
Will
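As a concrete illustration, a hypothetical indexes.conf stanza combining these settings might look like the sketch below; the sizes and retention period are placeholders, not recommendations:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# overall cap for the index; oldest buckets roll to frozen (deleted by default) once this is reached
maxTotalDataSizeMB = 500000
# time-based retention (30 days); whichever limit is hit first wins
frozenTimePeriodInSecs = 2592000
# optional per-path caps for hot/warm and cold storage
homePath.maxDataSizeMB = 300000
coldPath.maxDataSizeMB = 200000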
If the index size exceeds the maximum, will Splunk delete data or stop writing data?
Please provide more detail as to what exactly you are doing; your current description is too vague to determine how to advise you.
Yes, correct. If I remove | search httpcode = 500, I can get the results.
If you remove | search httpcode=500 from the Python, does it return all the data as expected from the parent index?
| eval Comp=case(Dept=="IT","XXXX",Dept=="Bus","YYYY",Dept=="Pay","ZZZZ")
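If you also want a default when Dept doesn't match any of the listed values, case() accepts a catch-all condition as its final pair, for example:

| eval Comp=case(Dept=="IT","XXXX", Dept=="Bus","YYYY", Dept=="Pay","ZZZZ", true(), "UNKNOWN")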
We have JSON fields to be auto-extracted in Splunk, and some non-JSON data that needs to be removed before extraction. So I set the following props.conf on my indexers:
[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SEDCMD-removeheader = s/^[^\{]*//g
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = JSON
TRUNCATE = 20000
and this props.conf on my SH:
[sony_waf]
KV_MODE = none
AUTO_KV_JSON = false
props.conf on my UF (which was there from before):
[sony_waf]
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true
When I did this, duplicate events started populating. When I remove INDEXED_EXTRACTIONS from the indexers and keep it in the UF props.conf, logs are not ingested at all. I tried setting KV_MODE = json on the SH (removing KV_MODE = none and AUTO_KV_JSON = false) and still get the same duplication. I'm completely confused here. Even after removing everything I added, duplicate logs are still coming in. I checked the log path at the source and there are no duplicate logs there. I even added crcSalt and still have the same issue. Please guide me on the correct config in the correct place.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="ipaddress",
    port=portnumber,
    username="username",
    password="password",
    scheme="https"
)

kwargs_blockingsearch = {
    "exec_mode": "normal",
    "earliest_time": "-15m",
    "latest_time": "now",
    "enable_lookups": "true"
}

searchquery_blocking = '''search index=sample source="*sample*" AND host="v*lu*" OR host="s*mple*" | search httpcode=500 '''

job = service.jobs.create(searchquery_blocking, **kwargs_blockingsearch)

while True:
    while not job.is_ready():
        pass
    if job["isDone"] == "1":
        break

results = job.results(**{"output_mode": "json"})
print(results)
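Note that job.results() returns a raw response stream, so print(results) will only show the stream object or raw bytes rather than the events. A small sketch of one way to parse it, assuming the same job object as above:

import json

# read the JSON payload from the results endpoint and pull out the result rows
payload = json.loads(job.results(output_mode="json").read())
for row in payload.get("results", []):
    print(row)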