Hi,
I'm trying to create a filter based on a threshold value that is unique for some objects and fixed for the others.
index=main | lookup thresholds_table.csv object output threshold | where number > threshold
The lookup contains something like:

object    threshold
chair     20
pencil    40
The problem here is that not all objects are inside the lookup, so I want to set a fixed threshold for all the other objects; for example, a threshold of 10 for every object except those inside the lookup.
I tried these things without success:
index=main | lookup thresholds_table.csv object output threshold | eval threshold = coalesce(threshold, 10) | where number > threshold
index=main | fillnull value=10 threshold | lookup thresholds_table.csv object output threshold | where number > threshold
index=main | eval threshold = 10 | lookup thresholds_table.csv object output threshold | where number > threshold
The objective is to identify when an object reaches an X average value, except for those objects that have a higher average value.
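For reference, a minimal sketch of the pattern these attempts are aiming for (the lookup file and the number field are from the post; tonumber() is my defensive addition in case either field comes back as a string):

index=main
| lookup thresholds_table.csv object OUTPUT threshold
| eval threshold = coalesce(tonumber(threshold), 10)
| where tonumber(number) > threshold

The key point is that the lookup must run before coalesce(), so unmatched objects still have a null threshold to fill with the default of 10.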
I am trying to create a timeline dashboard that shows the number of events for a specific user over the last 7 days (x-axis being _time and y-axis being the number of events). We do not have a field option for individual users yet. The syntax I have here shows a nice timeline in Search, but when I try to create a dashboard line chart from it, I either get nothing or mismatched info. Syntax I use for search: index="myindex1" OSPath="C:\\Users\\Snyder\\*".
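A minimal sketch of a search that a line chart panel can render directly (the index and OSPath filter are from the post; the 7-day window and daily span are illustrative):

index="myindex1" OSPath="C:\\Users\\Snyder\\*" earliest=-7d
| timechart span=1d count

Because timechart outputs a _time column plus a count series, a dashboard line chart can consume it as-is, whereas a bare event search gives the chart nothing structured to plot.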
Hi, we are using Splunk ES with notable events and suppressions. For the sake of completeness: we have alerts that produce notables, and some of these notables can be suppressed (through Splunk ES). So, in the "Incident Review" section we are able to see all the notables for which there are no suppressions. We are trying to send that same set (i.e., all the notables for which there are no suppressions) to Splunk SOAR. We tried adding the "Send to SOAR" action to one of the alerts that produce notables, but this way all the notables (even the suppressed ones) arrive on the SOAR. Do you know if there is a native feature (or quick way) to send all the notables for which there are no suppressions from Splunk to Splunk SOAR? Thank you in advance.
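One hedged sketch of a possible workaround (assuming, as I recall, that the ES notable macro applies the same suppression filtering Incident Review uses): schedule a single search over the notables via the macro and attach the "Send to SOAR" action to that scheduled search instead of to each correlation search:

| `notable`
| table _time rule_name urgency status_label

The fields in the table line are the usual notable-event fields; verify the macro's suppression behavior in your ES version before relying on this.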
Hi @jhilton90, with the host field you should have the Universal Forwarder hostname, unless you manually configured a different host (e.g. when you're reading files on a syslog server). Ciao. Giuseppe
I'm totally and utterly new to splunk. Just ran the dockerhub sample, and followed the instructions: https://hub.docker.com/r/splunk/splunk/
I opened the search tab and most search commands seem to work fine. For example, the following command:
| from datamodel:"internal_server.server"
| stats count
Returns a count of 33350.
While this command:
| tstats count from datamodel:"internal_server.server"
as well as this one:
| tstats count
both return zero.
How can I get tstats working in this docker env with the sample datasets?
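A hedged note on what may be going on (these are my assumptions, not from the post): a bare | tstats count scans only the default search indexes, which typically exclude the _internal index that backs the internal_server model, and tstats expects the datamodel=<model>.<dataset> spelling rather than the from command's datamodel:"..." form. Two variants worth trying:

| tstats count where index=_internal
| tstats summariesonly=false count from datamodel=internal_server.server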
Install the FortiGate add-on (https://splunkbase.splunk.com/app/2846) on your UF and your Splunk indexers and search head(s). That page has installation instructions.
Hi @jhilton90, you can get the UF's hostname only if it's reading local logs; otherwise you cannot get this information, and never for HFs. I asked for this feature on Splunk Ideas (https://ideas.splunk.com/ideas/EID-I-1731) and it's "Under consideration"; if you're interested, vote for it! Ciao. Giuseppe
Hi @jejohnson, Fortinet Fortigate sends its logs using syslog, so you have two choices: use a Universal Forwarder with a syslog server (better solution), or use a Heavy Forwarder (doesn't need a syslog server).

With the first solution, you configure a very small machine (even 2/4 CPUs and 4/8 GB RAM) with Linux and an rsyslog (or syslog-ng) server that writes the received syslogs to text files. Then you use the Universal Forwarder to read the files and send them to the Indexers. On the UF, you also have to install the Fortinet Fortigate Add-On for Splunk (https://splunkbase.splunk.com/app/2846) to parse the logs. This Add-On must also be installed on the Search Heads and eventually on intermediate Heavy Forwarders (if present). Plus: this solution requires a less performant server and keeps writing logs even if the Splunk UF is down. Minus: it requires manual configuration of rsyslog and the UF.

The second solution is easier to configure because you can do everything by GUI, but it requires a more performant server (at least 8/12 CPUs and 8/12 GB RAM). This solution is preferable if you already have a Heavy Forwarder. On the HF, you also have to install the Fortinet Fortigate Add-On for Splunk (https://splunkbase.splunk.com/app/2846) to parse the logs. The Add-On must also be installed on the Search Heads and eventually on intermediate Heavy Forwarders (if present). Plus: easier to implement. Minus: it requires a more performant server and doesn't ingest logs when Splunk is down.

In both solutions, it's better to have two receivers and a Load Balancer to avoid a Single Point of Failure in case of maintenance or failure of the server. Ciao. Giuseppe
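A minimal sketch of the file-monitoring side of the first option (the path, index, and sourcetype below are illustrative; check the add-on's documentation for the exact sourcetype it expects, and configure rsyslog or syslog-ng separately to write into that directory):

# inputs.conf on the UF, monitoring the files rsyslog writes
[monitor:///var/log/fortigate/*.log]
index = network
sourcetype = fgt_log
disabled = false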
Unfortunately, this is not relevant for this specific case, as it's not possible to run any simple search in the dashboards. But when trying to run the same searches in a dashboard built with Dashboard Studio, I now immediately get an error message instead of waiting indefinitely for data: "Search new_test_user_bmV3X3Rlc3RfdXNlcg__search__RMD5149cadac0aee6cd6_1693919913.13912 not found. The search may have been cancelled while there are still subscribers."
I want to use the free cloud trial. I have done everything, but my Access Instance option is not enabling. What should I do? Please refer to the screenshot below and help me. Thank you. @suyogpk_11
Hi @Adpafer, if you can find a regex to identify one or both of the data flows, you can create two stanzas in all the configuration files. If you cannot, you could use the app I hinted at before, because it uses a search. Ciao. Giuseppe
Splunk has a manual for that. See https://docs.splunk.com/Documentation/Splunk/9.1.0/Indexer/Backupindexeddata In a nutshell, hot data is rolled to warm, then all data (except new hot buckets) is backed up while Splunk remains up. Yes, new data is missed by the backup, but it will be backed up next time. There's a good discussion on the topic at https://community.splunk.com/t5/Deployment-Architecture/How-to-back-up-hot-buckets/m-p/104780
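A minimal sketch of the manual roll step the manual describes (index name and credentials are illustrative; roll-hot-buckets is the documented REST endpoint for this):

# Roll hot buckets to warm for the "main" index, then back up warm/cold
splunk _internal call /data/indexes/main/roll-hot-buckets -auth admin:changeme
# Back up e.g. $SPLUNK_DB/main/db/ and $SPLUNK_DB/main/colddb/ with your usual tooling

After the roll, the freshly warmed buckets are picked up by the next backup pass.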
The workaround is to bring up the Enterprise Console, and stop and start the Controller from there, instead of using command line commands. We have since migrated to a SaaS server, so we no longer have an on-prem controller.