Install the FortiGate add-on (https://splunkbase.splunk.com/app/2846) on your UF and on your Splunk indexers and search head(s). That page has installation instructions.
Hi @jhilton90, you can get that information about the UF only if it's reading local logs; otherwise you cannot get it, and never for HFs. I requested this feature on Splunk Ideas (https://ideas.splunk.com/ideas/EID-I-1731) and it's "Under consideration"; if you're interested, vote for it! Ciao. Giuseppe
Hi @jejohnson, Fortinet FortiGate sends its logs via syslog, so you have two choices: use a Universal Forwarder with a syslog server (the better solution), or use a Heavy Forwarder (which doesn't need a syslog server).
With the first solution you configure a small machine (even 2-4 CPUs and 4-8 GB RAM) running Linux and an rsyslog (or syslog-ng) server that writes the received syslog events to text files. Then you use the Universal Forwarder to read the files and send them to the indexers. On the UF you also have to install the Fortinet FortiGate Add-On for Splunk (https://splunkbase.splunk.com/app/2846) to parse the logs. This add-on must also be installed on the search heads and, if present, on any intermediate Heavy Forwarders. Plus: this solution needs a less powerful server and keeps writing logs even if the Splunk UF is down. Minus: it requires manual configuration of rsyslog and the UF.
The second solution is easier to configure because you can do everything via the GUI, but it requires a more powerful server (at least 8-12 CPUs and 8-12 GB RAM). This solution is preferable if you already have a Heavy Forwarder. On the HF you also have to install the Fortinet FortiGate Add-On for Splunk (https://splunkbase.splunk.com/app/2846) to parse the logs. The add-on must also be installed on the search heads and, if present, on any intermediate Heavy Forwarders. Plus: easier to implement. Minus: it requires a more powerful server and doesn't ingest logs when Splunk is down.
In both solutions it's better to have two receivers behind a load balancer to avoid a single point of failure during maintenance or a server outage. Ciao. Giuseppe
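To make the first option more concrete, here is a minimal sketch of the syslog-server-plus-UF setup. The source IP, file path, index, and sourcetype below are assumptions for illustration; check the Fortinet FortiGate Add-On documentation for the exact sourcetype it expects.
# /etc/rsyslog.d/fortigate.conf -- receive FortiGate syslog on UDP 514 and write it to a file
module(load="imudp")
input(type="imudp" port="514")
if $fromhost-ip == '192.0.2.1' then /var/log/fortigate/fortigate.log
& stop
# inputs.conf on the Universal Forwarder (e.g. in $SPLUNK_HOME/etc/system/local/)
[monitor:///var/log/fortigate/fortigate.log]
index = network
# sourcetype below is an assumption -- use the one documented by the add-on
sourcetype = fortigate_log
disabled = 0
Remember to also set up log rotation for /var/log/fortigate/ so the file does not grow without bound.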
Unfortunately, this is not relevant for this specific case, as it's not possible to run any simple search in the dashboards. But when trying to run the same searches in a dashboard built with Dashboard Studio I now immediately get an error message instead of waiting indefinitely for data: "Search new_test_user_bmV3X3Rlc3RfdXNlcg__search__RMD5149cadac0aee6cd6_1693919913.13912 not found. The search may have been cancelled while there are still subscribers."
I want to use the free cloud trial. I have done everything, but my access instance option is not enabled. What should I do? Please refer to the screenshot below. Thank you. @suyogpk_11
Hi @Adpafer, if you can find a regex that identifies one or both of the data flows, you can create two stanzas in the configuration files. If you cannot, you could use the app I hinted at before, because it uses a search. Ciao. Giuseppe
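As a hedged illustration of the two-stanza idea (the sourcetype, regex, and index name below are placeholders, not taken from this thread), routing by regex is typically done with a props.conf/transforms.conf pair on the parsing tier:
# props.conf
[my_sourcetype]
TRANSFORMS-route_flows = route_to_flow_a_index
# transforms.conf
[route_to_flow_a_index]
REGEX = pattern_identifying_flow_a
DEST_KEY = _MetaData:Index
FORMAT = index_for_flow_a
Events that do not match the regex keep the default index configured for that sourcetype.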
Splunk has a manual for that. See https://docs.splunk.com/Documentation/Splunk/9.1.0/Indexer/Backupindexeddata In a nutshell, hot data is rolled to warm, then all data (except new hot buckets) is backed up while Splunk remains up. Yes, new data is missed by the backup, but it will be backed up next time. There's a good discussion on the topic at https://community.splunk.com/t5/Deployment-Architecture/How-to-back-up-hot-buckets/m-p/104780
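A hedged sketch of that procedure for a single index (the main index in its default location); the REST endpoint, credentials, and paths are assumptions to verify against the linked documentation for your version:
# roll the index's hot buckets to warm, then back up warm/cold while Splunk stays up
curl -k -u admin:changeme -X POST \
  https://localhost:8089/services/data/indexes/main/roll-hot-buckets
# copy everything except hot buckets (their directories start with "hot_")
rsync -a --exclude 'hot_*' /opt/splunk/var/lib/splunk/defaultdb/db/ /backup/main/db/
rsync -a /opt/splunk/var/lib/splunk/defaultdb/colddb/ /backup/main/colddb/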
The workaround is to bring up the Enterprise Console, and stop and start the Controller from there, instead of using command line commands. We have since migrated to a SaaS server, so we no longer have an on-prem controller.
Hi,
While importing custom modules (e.g. `from logger import Logger`), we see a `ModuleNotFoundError: No module named 'logger'` error in splunkd.log, generated by the file "/opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py". We suspect the library is somehow unable to find the app's internal modules and therefore throws this error.
We also see the warning `DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses` in the same log file.
This issue appears on Splunk version 9.0.x; on Splunk version 8.2.x it works fine. The main difference we have noticed between these two versions is that Python 2.7 support was removed in 9.0.x.
We would like to know a possible solution for this error.
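If that suspicion is right, one common workaround (a sketch under that assumption, not a confirmed fix for this thread) is to make the app's own bin directory importable before the top-level import runs; "my_app" and the module name "logger" are placeholders matching the example above:
# Hypothetical sketch for the top of the handler script.
import os
import sys

APP_BIN = os.path.join(
    os.environ.get("SPLUNK_HOME", "/opt/splunk"),
    "etc", "apps", "my_app", "bin",
)
if APP_BIN not in sys.path:
    sys.path.insert(0, APP_BIN)

from logger import Logger  # now resolvable as a top-level module on sys.path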
Thanks for the quick reply! I've also added the following to the end:
| search location=3 OR location=2
| eval status=if(location=2,"Waiting...","Completed")
| table message status
This now lists all of my defined tasks and tells me whether each task has run or not, based on whether the event is returned by the search. How do I include wildcards? The task from my indexed data looks like this: "task_a has run successfully with return code x after y minutes". My lookup task is simply "task_a has run successfully", so I'd like the search to allow for "task_a has run successfully*".
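One hedged way to get that effect (an illustrative sketch, not the answer given in this thread) is to trim the indexed task string down to the lookup's wording before the stats, so both sides compare equal:
index=one OR index=two
| rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)"
| eval task=replace(task, "^(.* has run successfully).*$", "\1")
| stats count as index_count by task
After this normalization, the append/stats sum(location) logic from the earlier answer matches the lookup entries exactly.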
index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count as index_count by task
| eval location = 1
| append
[|inputlookup tasks.csv | eval location = 2 ]
| stats sum(location) as location, sum(index_count) as index_count by task
| fillnull value=0 index_count
If location = 1, the task is in the indexes but not in the lookup. If location = 2, the task is in the lookup but not in the indexes. If location = 3, the task is in both the lookup and the indexes.
There is no need to go back to the Add-on Builder the app came from. As long as the app runs in a Splunk instance, this command can be used to generate an .spl that can easily be imported into any Splunk instance with Add-on Builder:
sudo /opt/splunk/bin/splunk package app <PACKAGENAME>
Splunk username:
Splunk password:
All that is required is a Splunk UI account with admin rights on the instance where the app is installed and from which it is to be exported.
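As a hedged follow-up (the output location varies by version, so treat the path here as an assumption): the command prints where the generated .spl is written, and the package can then be installed on the target instance either through the UI (Apps > Manage Apps > Install app from file) or from the CLI:
# install the exported package on the target instance (the path is a placeholder)
sudo /opt/splunk/bin/splunk install app /tmp/<PACKAGENAME>.spl -update 1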
Hi,
I'm trying to create a table that contains a list of tasks. The list is static and stored in a lookup table called tasks.csv.
So far I have the following search:
index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task
| search [|inputlookup tasks.csv ]
This creates me a table that looks like this:
task     count
task_a   1
task_b   1
task_c   1
task_d   1
However, if a task in my static list does not appear in the search results, it does not show in the table. I want the table to contain the whole list of tasks, regardless of whether they appear in the search results or not.
i.e.
task     count
task_a   1
task_b   1
task_c   1
task_d   1
task_e   0
task_f   0
Any ideas on how I can do this?
The closest I've got is using a join, which does work, but it does not allow for a wildcard, meaning I'd need to specify the whole 'task'.
|inputlookup tasks.csv
| join type=left task [ | search index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task]
| fillnull value=0 count
| table task count
Would appreciate any thoughts or suggestions. Thanks in advance.
You can do this in one alert but it gets a bit messy; you would probably be better off using two alerts with different schedules, time periods, and alert criteria.
What is the pattern? Please describe it in more detail. (Regular expressions work by finding patterns but you have to be able to precisely describe the pattern.)
I have a Splunk Universal Forwarder installed. Splunk Enterprise is seeing the forwarder; now I want to send network firewall logs to the host running the forwarder so they can be sent on to the Enterprise platform.
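A minimal sketch of the forwarder-side configuration, assuming the firewall logs end up as a file on the forwarder host; the file path, host names, port, index, and sourcetype are placeholders to adapt:
# inputs.conf on the Universal Forwarder
[monitor:///var/log/firewall/firewall.log]
index = network
sourcetype = fortigate_log
# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
The indexers must be configured to receive forwarded data on the chosen port (9997 by default) for this to work.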