Hi @Adpafer, if you can find a regex that identifies one or both of the data flows, you can create two stanzas in all the configuration files. If you cannot, you could use the app I hinted at before, because it uses a search. Ciao. Giuseppe
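For reference, a minimal sketch of the two-stanza approach, assuming the flows can be told apart by a leading keyword (the file path, stanza names, and regexes below are placeholders, not from this thread):

# props.conf on the parsing tier
[source::/var/log/mixed.log]
TRANSFORMS-split_flows = set_flow_a, set_flow_b

# transforms.conf
[set_flow_a]
# events beginning with FLOWA get their own sourcetype
REGEX = ^FLOWA
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::flow_a

[set_flow_b]
REGEX = ^FLOWB
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::flow_b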
Splunk has a manual for that. See https://docs.splunk.com/Documentation/Splunk/9.1.0/Indexer/Backupindexeddata In a nutshell, hot data is rolled to warm, then all data (except new hot buckets) is backed up while Splunk remains up. Yes, new data is missed by the backup, but it will be backed up next time. There's a good discussion on the topic at https://community.splunk.com/t5/Deployment-Architecture/How-to-back-up-hot-buckets/m-p/104780
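If you want to force the hot-to-warm roll before taking the backup, this is roughly what it looks like from the CLI (the index name "main" and the credentials are placeholders; see the docs page above for the exact form in your version):

/opt/splunk/bin/splunk _internal call /data/indexes/main/roll-hot-buckets -auth admin:changeme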
The workaround is to bring up the Enterprise Console and stop and start the Controller from there, instead of using command-line commands. We have since migrated to a SaaS server, so we no longer have an on-prem controller.
Hi,
while importing custom modules (e.g. `from logger import Logger`), we can see a `ModuleNotFoundError: No module named 'logger'` error in splunkd.log, generated by the file "/opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py". We suspect the library is somehow not able to find our internal modules and hence throws the error.
We can also see the warning `DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses` in the same log file.
This issue appears on Splunk version 9.0.x; on Splunk version 8.2.x it works fine. The main difference we have noticed between these two versions is that Python 2.7 support was removed in 9.0.x.
We would like to know a possible solution for this error.
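A common cause for this with persistent REST handlers is that the handler script's own directory is not on sys.path when appserver.py loads it. A minimal sketch of a workaround, assuming logger.py sits next to the handler script in the app's bin directory (an assumption about the app layout, not something stated above):

import os
import sys

# Assumption: logger.py lives in the same directory as this handler script.
# Prepend that directory to sys.path so "from logger import Logger" resolves
# regardless of the working directory appserver.py uses to load the handler.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from logger import Logger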
Thanks for the quick reply! I've also added the following to the end:
| search location=3 OR location=2
| eval status=if(location=2,"Waiting...","Completed")
| table message status
This now lists all of my defined tasks and tells me whether the task has run or not, based on whether the event is returned by the search. How do I include wildcards?
The task from my indexed data looks like this: "task_a has run successfully with return code x after y minutes"
My lookup task is simply "task_a has run successfully"
So I'd like the search to allow for: task_a has run successfully*
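One way to get the wildcard effect without a wildcard is to normalize the extracted task down to the lookup's form before comparing. A minimal sketch, assuming every indexed message follows the "with return code ..." pattern above (the replace pattern is an assumption; adjust it to your real format):

| eval task=replace(task, "\s+with return code.*$", "")

Run that right after the rex extraction, so the indexed tasks and the lookup tasks compare as the same string.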
index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count as index_count by task
| eval location = 1
| append
[|inputlookup tasks.csv | eval location = 2 ]
| stats sum(location) as location by task
| fillnull value=0 index_count

If location = 1, the task is in the indexes but not in the lookup
If location = 2, the task is in the lookup but not in the indexes
If location = 3, the task is in both the lookup and the indexes
There is no need to go back to the Add-on Builder instance the app came from. As long as the app runs in a Splunk instance, this command can be used to generate an .spl that can easily be imported into any Splunk instance with Add-on Builder:
sudo /opt/splunk/bin/splunk package app <PACKAGENAME>
Splunk username:
Splunk password:
All that is required is a Splunk account with admin rights on the instance where the app is installed and from which it is to be exported.
Hi,
I'm trying to create a table that contains a list of tasks. The list is static and stored in a lookup table called tasks.csv.
So far I have the following search:
index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task
| search [|inputlookup tasks.csv ]
This gives me a table that looks like this:
task      count
task_a    1
task_b    1
task_c    1
task_d    1
However, if a task in my static list does not appear in the search results, it does not show in the table. I want the table to contain the whole list of tasks, regardless of whether they appear in the search results or not.
i.e.
task      count
task_a    1
task_b    1
task_c    1
task_d    1
task_e    0
task_f    0
Any ideas on how I can do this?
The closest I've got is using a join, which does work, but it does not allow for a wildcard, meaning I'd need to specify the whole 'task'.
|inputlookup tasks.csv
| join type=left task [ | search index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task]
| fillnull value=0 count
| table task count
Would appreciate any thoughts or suggestions. Thanks in advance.
You can do this in one alert but it gets a bit messy - you would probably be better off using two alerts with different schedules, time periods and alert criteria
What is the pattern? Please describe it in more detail. (Regular expressions work by finding patterns but you have to be able to precisely describe the pattern.)
I have a Splunk universal forwarder installed. Splunk Enterprise is seeing the forwarder; now I want to send network firewall logs to the host running the forwarder so that they are forwarded on to the Enterprise platform.
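If the firewall sends syslog, a minimal sketch is a network input in inputs.conf on the forwarder host (the port, index, and sourcetype below are assumptions; UDP 514 is just the conventional syslog port and needs elevated privileges to bind):

[udp://514]
sourcetype = syslog
index = network
disabled = false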
I'm trying to produce an architecture diagram of our Splunk environment and I want to know what each of our universal forwarders and heavy forwarders is ingesting and sending. I'm looking in inputs.conf and outputs.conf but they are of no use. Is there a way to view what each forwarder is ingesting and sending, whether that be via the command line or in Splunk itself?
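One way to see it from Splunk itself is the forwarders' own throughput metrics in the _internal index; a minimal sketch (this assumes the forwarders send their internal logs to the indexers, which is the default):

index=_internal source=*metrics.log* group=per_sourcetype_thruput
| stats sum(kb) as total_kb by host, series

Here host is the forwarder and series is the sourcetype it is sending, which maps reasonably well onto a diagram of who ingests what.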
I want a condition like this: when severity=ERROR, show its received payload event; if it has a sync/C2V event then it is a COO error, and if it does not have that then it is an RDR error. Is there any way to do this? Please help me with it. Thanks
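A minimal sketch of that classification in SPL, assuming the sync/C2V marker appears in the raw event text (the index name and the payload field name are placeholders, not from the post):

index=my_index severity=ERROR
| eval error_type=if(like(_raw, "%sync/C2V%"), "COO error", "RDR error")
| table _time payload error_type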
Hello
I'm using Splunk Cloud. I have Jenkins logs indexed in my system, but for some reason the events break incorrectly.
I took an output example and added it to Splunk with the "Add Data" option, and there it looks OK, but when I'm searching for the sourcetype it is still broken.
What is the best way to parse Jenkins logs?
This is my sourcetype configuration:
[console_logs]
CHARSET=UTF-8
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
category=Structured
disabled=false
pulldown_type=true
and I want it to be shown in chunks like:
<time> Started by user
<time> Finished:
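If each build log should be one event running from "Started by user" to "Finished:", a minimal sketch of the event breaking would be something like this (the BREAK_ONLY_BEFORE pattern is an assumption based on the lines shown above; adjust it to the actual console output):

[console_logs]
CHARSET = UTF-8
SHOULD_LINEMERGE = true
# start a new event whenever a "Started by user" line appears
BREAK_ONLY_BEFORE = Started by user
# raise the merge limit, since build logs can be long (default is 256 lines)
MAX_EVENTS = 10000

Note that these settings must be applied where parsing happens (the Cloud indexers via an uploaded app, or an intermediate heavy forwarder), not only on the search head.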
Also, can we define 2 different search run intervals in this query? Like below:
index=ABC sourcetype=XYZ login | stats count | where count=0
(between 23:00 and 07:00, the search can be run every 2 hours to check the last 2 hours of events)
AND
index=ABC sourcetype=XYZ login | stats count | where count<=20
(between 07:00 and 23:00, the search can be run every 1 hour to check the last 1 hour of events)
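Following the two-alerts suggestion earlier in the thread, a minimal sketch of the schedules in savedsearches.conf (the stanza names are placeholders; the searches and time windows come from the post, and the boundary hours may need adjusting):

[login_check_night]
search = index=ABC sourcetype=XYZ login | stats count | where count=0
enableSched = 1
# every 2 hours between 23:00 and 07:00
cron_schedule = 0 1,3,5,23 * * *
dispatch.earliest_time = -2h
dispatch.latest_time = now

[login_check_day]
search = index=ABC sourcetype=XYZ login | stats count | where count<=20
enableSched = 1
# hourly between 07:00 and 23:00
cron_schedule = 0 7-22 * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now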