All Posts

If your system doesn't accept text DoW notation such as Mon or Tue, you can use numeric values. In most systems the week starts from Sunday as 0, so "0 0 * * 1/2" would run my alert. Here is the relevant passage from man 5 crontab: Step values can be used in conjunction with ranges. Following a range with ``/<number>'' specifies skips of the number's value through the range. For example, ``0-23/2'' can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is ``0,2,4,6,8,10,12,14,16,18,20,22''). Steps are also permitted after an asterisk, so if you want to say ``every two hours'', just use ``*/2''. (Of course, my manpage also states: day of week 0-7 (0 or 7 is Sun, or use names).)
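To make the step behavior concrete, a few illustrative schedules (a sketch; the bare start/step form such as "1/2" is an extension that not every cron implementation accepts, while steps after a range or an asterisk are standard Vixie cron):

    0 0 * * 1/2      # midnight every other day of the week starting Monday: Mon, Wed, Fri, Sun
    0 0 * * 1-5/2    # midnight every other weekday: Mon, Wed, Fri
    0 0-23/2 * * *   # every other hour, on the hour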
So, if I am running 9.3.1 and Tenable is still flagging this, what was the solution, or is there a fix so this doesn't show up in the scan?
The SmartStore cache has no effect on the freezing of data. If the cache becomes full, data will be evicted from the cache to make room for new data.
I have the same problem with Amazon Linux 2023 (kernel 6.1). The reason is that Splunk Enterprise does not support Linux kernel 6; I noticed Splunk has no problem with Amazon Linux 2023 (kernel 5). In short, Splunk does not support the OS (Linux kernel) that you are using.
Restarting did get rid of the message. Thanks for the reminder that a restart is necessary after every change.
Greetings, I found some useful savedsearches under SA-AccessProtection / DA-ESS-AccessProtection, which I am interested in using. However, I'd like to understand these use cases before making them live. Are these apps and their content documented somewhere? So far, I have not had any luck. Thanks!
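In the meantime, one way to review what those apps contain is to enumerate their saved searches over REST (a sketch; the two app names are taken from the question, adjust as needed):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search eai:acl.app IN ("SA-AccessProtection", "DA-ESS-AccessProtection")
    | table title, description, search

Reading the description and search fields side by side is often enough to understand what each use case is looking for.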
I have a dashboard that a specific team uses. Today, they asked why one of the panels was broken. Looking into it, we were receiving this error from the search:

    Error in 'fit' command: Error while fitting "StateSpaceForecast" model: timestamps not continuous: at least 33 missing rows, the earliest between "2024-01-20 07:00:00" and "2024-01-20 09:00:00", the latest between "2024-10-02 06:00:00" and "2024-10-02 06:00:01"

That seemed pretty straightforward; I thought we might be missing some timestamp values. This is the query we are running:

    | inputlookup gslb_query_last505h.csv | fit StateSpaceForecast "numRequests" holdback=24 forecast_k=48 conf_interval=90 output_metadata=true period=120

Looking into the CSV file itself, I went to look for missing values under the numRequests column. We have values for each hour going back almost a year. The timestamps mentioned in the error look like the attached screenshot (not shown here). Looking at that screenshot now, there is an hour missing: the timestamp for 08:00. That may be the cause. How would I go about efficiently finding the 33 missing values? Each missing value would be in between two hours. Will I have to go through and find skipped hours among 8k results in the CSV file? Thanks for any help.
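One way to find all the gaps without scanning 8k rows by eye is to sort by time and compare each row's timestamp with the previous one (a sketch; it assumes the lookup's time column is named timestamp and formatted as "%Y-%m-%d %H:%M:%S", so adjust both to your actual CSV):

    | inputlookup gslb_query_last505h.csv
    | eval t=strptime(timestamp, "%Y-%m-%d %H:%M:%S")
    | sort 0 t
    | streamstats current=f last(t) as prev_t
    | eval gap_hours=round((t - prev_t) / 3600)
    | where gap_hours > 1
    | eval missing_rows=gap_hours - 1
    | eval gap_start=strftime(prev_t, "%Y-%m-%d %H:%M:%S"), gap_end=strftime(t, "%Y-%m-%d %H:%M:%S")
    | table gap_start, gap_end, missing_rows

Each output row is one gap between consecutive rows of the lookup; the missing_rows values should sum to the 33 the error reports.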
It captured 265 lines with a total of 26,591 characters.  The original sp text size is 38,841. 
You may find this link helpful: https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/ChecktheintegrityofyourSplunksoftwarefiles
This isn't valid.
Hi @hazem, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated.
Hi @gcusello, many thanks, I appreciate your support.
JSON is a structured format, so the order of fields should not matter to the recipient. After all, you will be addressing the fields by their names, unless you're manipulating that JSON on the receiving end using something not fit for JSON processing. In that case, maybe JSON is not the best format choice.
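For illustration, a hypothetical event (not from this thread) shows that a JSON-aware consumer retrieves a field by name no matter where it sits in the object:

    | makeresults
    | eval raw="{\"operation\":\"send\",\"timestamp\":\"2024-10-02 06:00:00\",\"Subject\":\"hello\"}"
    | eval operation=spath(raw, "operation")
    | table operation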
Hi @hazem, it depends on your use cases: what do you want to monitor? Use a log level that gives you the data you need; I cannot tell you from the outside what the best log level is. In general, except in some situations, I'd avoid the Debug level and I'd use the Alert level, but, as I said, it depends on your use cases. In addition, this is a question for Palo Alto experts rather than Splunk experts, because they know the contents of each log level. Ciao. Giuseppe
Thank you @gcusello for your reply. Our customer has asked me about the recommended log level that should be sent, for example, from Palo Alto to Splunk. Do you have any answer for this?
Hi @hazem, for many products there are guides for integration with Splunk developed by the third-party vendor itself, and the only way to find them is to search on Google (e.g. for Sophos, see https://partnernews.sophos.com/it-it/2021/05/prodotti/splunk-integration-for-sophos-firewall/). Anyway, searching for "Splunk Getting Data In" gives you a guide to data ingestion in Splunk: https://docs.splunk.com/Documentation/Splunk/latest/Data/WhatSplunkcanmonitor . In any case, the first step in data ingestion should be to identify the technology to ingest and to search for the related Add-on on apps.splunk.com, which usually guides users through the integration. Ciao. Giuseppe
Hi, the website behind the link STEP | Splunk Training and Enablement Platform: Course and Class Details shows me only classes that cost 750 USD. Do I have to apply a code during the booking process? Furthermore, I'm getting the following error message when I try to book the class (screenshot not shown).
Is there any guide on how to configure security products to send their logs to Splunk, or on the recommended logs that should be sent, like the DSM guides in QRadar?
I think it is not possible to change the order of the fields with the tojson command. But try to create a JSON object with eval and the json_object function; maybe that accomplishes your goal.

    ...your search... | eval my_json_object=json_object("timestamp", timestamp, "Subject", Subject, "emailBody", emailBody, "operation", operation)

json_object should emit the keys in the order you list them, which is what you're after.
Hi @Alex_Rus, please try this regex in the blacklist option: (?ms)EventCode: 4624.*Account Name: MY_Account Name.*Source Network Address: MY_Source Network Address Otherwise, please try it in transforms.conf (on the indexers) to filter events. Ciao. Giuseppe
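For the transforms.conf route, a minimal sketch of routing the matching events to the nullQueue (the sourcetype name and the MY_* placeholders are assumptions; substitute your own):

    props.conf:
    [WinEventLog:Security]
    TRANSFORMS-drop4624 = drop_4624_noise

    transforms.conf:
    [drop_4624_noise]
    REGEX = (?ms)EventCode: 4624.*Account Name: MY_Account Name.*Source Network Address: MY_Source Network Address
    DEST_KEY = queue
    FORMAT = nullQueue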