I need to find the distance from every point in a lookup from an event that has lat and long. I'm using the haversine app to calculate distance, and my current query is (sanitized):
| inputlookup keylocations | appendcols [search index="{my_sourcetype}" country="us" sourcetype="confirmed" providence_state="{some_location}"] | haversine originField=location_origin units=mi outputField=dist lat lon
Currently, haversine is only providing the distance to the first item in the lookup. How do I get the query to provide the distance to all points of interest (POI)? Ultimately, I need to find the closest POI to the event; however, the data feed is from an external source and they don't normalize their providence_state information so I can't use matching in the lookup.
Here is my csv file (sanitized):
location_name;location_origin;location_state
POI1;12.3456,-12.3456;WA
POI2;12.3456,-12.3456;D.C.
and finally what my event looks like (also sanitized):
_time;providence_state;country;lat;lon;value
2020-03-12 08:26:49.528380;New Castle, DE;US;39.5393;-75.6674;1
If this can't be done in haversine, then can someone please help me find a better solution? Thanks in advance.
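One approach (a sketch, assuming the haversine command reads the event's lat/lon fields as shown and that keylocations is small enough to re-read per event) is to invert the search: run the event search first, then map each event over the whole lookup so every POI row gets a distance:

```
index="{my_sourcetype}" country="us" sourcetype="confirmed" providence_state="{some_location}"
| map maxsearches=10 search="| inputlookup keylocations
    | eval lat=$lat$, lon=$lon$
    | haversine originField=location_origin units=mi outputField=dist lat lon
    | sort dist
    | head 1"
```

The `sort dist | head 1` keeps only the closest POI per event; drop those two lines to see the distance to every POI.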
Hi there.
Is this an indexer issue or a search head one?
We have many (more than 200) scheduled saved searches, interactive dashboards running with automatic refreshes, etc.
Recently I have been seeing a very high delay between the scheduled time and the dispatch of the search:
_time savedsearch_name Scheduled_Time Dispatch_Time Time_Diff
2020-03-12 16:15:19.941 Saved_Search1 03/12/2020 16:05:00 03/12/2020 16:15:19 10:19
2020-03-12 16:15:19.626 Saved_Search2 03/12/2020 16:05:00 03/12/2020 16:15:19 10:19
2020-03-12 16:15:19.446 Saved_Search3 03/12/2020 16:05:00 03/12/2020 16:15:18 10:18
2020-03-12 16:15:19.162 Saved_Search4 03/12/2020 16:05:00 03/12/2020 16:15:18 10:18
[...]
Can the system be improved? How?
Splunk Enterprise 7.0.0
SHs (3 nodes, clustered - no cpu issues)
Indexers (4 nodes, not clustered - some CPU issues; we recently added 2 vCPUs per node and those issues were resolved)
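As a starting point for quantifying the lag (a sketch; scheduled_time and dispatch_time are epoch fields the scheduler writes to scheduler.log, so adjust names if your version logs them differently):

```
index=_internal sourcetype=scheduler status=success
| eval delay_sec = dispatch_time - scheduled_time
| stats count avg(delay_sec) as avg_delay max(delay_sec) as max_delay by savedsearch_name
| sort - max_delay
```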
Thanks.
Is there a way to enable the "Add to Triggered Alerts" action via the REST API or CLI for all alerts in a custom application? I am looking to enable this on over 100 alerts where it is currently not enabled, and hoping we don't have to do it manually through the GUI.
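A possible sketch using the REST API, assuming that "Add to Triggered Alerts" corresponds to the alert.track setting on a saved search (the app name, alert name, and credentials below are placeholders; test on a single alert first):

```
# list the saved searches in the app to get the alert names
curl -k -u admin:changeme \
    "https://localhost:8089/servicesNS/nobody/my_custom_app/saved/searches?count=0&output_mode=json"

# enable tracking in Triggered Alerts for one alert
curl -k -u admin:changeme \
    "https://localhost:8089/servicesNS/nobody/my_custom_app/saved/searches/My%20Alert%20Name" \
    -d alert.track=1
```

A shell loop over the names from the first call would cover all 100+ alerts.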
Example:
Fetch VPN user details from one search and use the username to get details like email addresses from another search.
index=## host= ## sourcetype="##" source="#.log" eventtype=# parent session started
| table user host src_ip group
This lists details like:
user host src_ip group
bxxxx.gwwww x.x.x.x x.x.x.x Finance
I would like to add more details to the table, like the person's email address and location, which I can get from
index=@@ sourcetype=@@
Company: xyz
Employee_ID: aaa
Full_Legal_Name: Mr.ttt ccc
Future_Termination_TF: 0
Location: ddd
Primary_Work_Email: bxxxx.gwwww@xyz.com
How do I take the user value from the first search (e.g. bxxxx.gwwww) and match it against the second search to get the email address and other info?
The only partially matching value between the two searches is the user's name; there are no exact field matches between them.
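One hedged approach, assuming the username always equals the local part of Primary_Work_Email: derive a matching user field in the second search, then join on it:

```
index=## host=## sourcetype="##" source="#.log" eventtype=# parent session started
| table user host src_ip group
| join type=left user
    [ search index=@@ sourcetype=@@
      | eval user=mvindex(split(Primary_Work_Email, "@"), 0)
      | table user Primary_Work_Email Location ]
```

If the assumption holds, each VPN row gains the email and location columns; rows with no match stay as they were because of type=left.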
I am creating a dashboard to show all Linux command line history per user and I would like to create an input where you can type the user and if it matches anything in a case statement, it assigns a value to "source" and runs the search.
For example, I have two sources:
source=/root/.bash_history
source=/opt/splunk/.bash_history
I have a token $acct$ which holds the user that was typed in the input.
I wrote this search:
index=linux sourcetype=linux_cli
| eval search_source=case($acct$ == root, "/root/.bash_history", $acct$ == splunk, "/opt/splunk/.bash_history")
| search source=search_source
But this returns no results. How can I do this assignment during search?
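Two things appear to go wrong in the query above: the token and the literals inside case() need quotes, and `| search source=search_source` compares source to the literal string "search_source" rather than to the field's value. A sketch using where instead:

```
index=linux sourcetype=linux_cli
| eval search_source=case("$acct$"=="root",   "/root/.bash_history",
                          "$acct$"=="splunk", "/opt/splunk/.bash_history")
| where source==search_source
```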
I just noticed that our Red Hat Splunk servers are missing audit log data for users logging in to Splunk.
For example, this query no longer returns data:
index=_audit action="login attempt" "info=succeeded"
I do have some audit data, just not the login attempts.
The data seems to have stopped after upgrading to version >= 8.0.0.
I only have one Windows Splunk server, and ALL the audit data appears to be there.
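A quick diagnostic sketch to see which audit actions are still arriving on the Red Hat servers (the login event format may have changed across versions, so comparing action values between the Windows and Red Hat hosts should show what to search for now):

```
index=_audit earliest=-24h
| stats count by action
| sort - count
```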
Hi Folks,
We have planned to upgrade from 7.2.4 to 8.0.2.
Splunk's documentation says to test your apps' compatibility.
Does this apply only to Splunk apps downloaded from Splunkbase, or also to user-created apps such as parsing apps, extraction apps, and so on?
Also, could someone tell me the basic things we need to look at when upgrading?
Any help is appreciated...
Pramodh
I am unable to execute xmlprettyprint; I get the error below running on version 8.0.2.
command="xmlprettyprint", Error : Traceback:
Traceback (most recent call last):
  File "D:\Program Files\Splunk\etc\apps\xmlutils\bin\xmlprettyprint.py", line 85, in
    r['raw'] = "Failed to parse: " + str(stack) + "\n" + r['_raw']
  File "D:\Program Files\Splunk\Python-2.7\lib\UserDict.py", line 40, in __getitem__
    raise KeyError(key)
KeyError: '_raw'
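The traceback shows the script's own error handler crashing: line 85 reads `r['_raw']` from a result that has no `_raw` field, which raises the KeyError. A hypothetical defensive rewrite of that one line (a sketch, not the app author's fix):

```python
def build_error(r, stack):
    """Attach a parse-failure message, tolerating results without '_raw'.

    Mirrors line 85 of xmlprettyprint.py but uses dict.get so a result
    dict that lacks the '_raw' key yields an empty payload instead of
    raising KeyError.
    """
    r['raw'] = "Failed to parse: " + str(stack) + "\n" + r.get('_raw', '')
    return r

# a result with no '_raw' no longer crashes
print(build_error({}, "boom")['raw'])
```

The underlying question of why `_raw` is missing (the app predates Splunk 8 and targets Python 2) would still need investigating, but this stops the handler itself from failing.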
Hi,
I am trying to get the ThreatHunting app (https://splunkbase.splunk.com/app/4305/) up and running.
It references a TGZ file for various CSV lookup templates, but the TGZ file is missing four of the lookup files, and so far I have not found any documentation explaining which fields those files need or where they come from.
(I did create files with headings that I gleaned from transforms.conf, but I assume that is not sufficient.)
The missing ones are:
threathunting_dns_whitelist.csv
threathunting_file_create_whitelist.csv
threathunting_pipe_whitelist.csv
threathunting_wmi_whitelist.csv
Does anyone have working templates for those files?
Additionally, I see a pan_threat macro referencing a pan_logs index, which I have no clue about.
Unfortunately the wiki does not explain the setup very well.
And on top of that, there is a conflict on the eventcode lookup with the Sysmon TA, which is a prerequisite for this app.
Thanks for any pointers
afx
Hey guys,
I have some questions regarding parsing-queue issues I have been observing on our heavy forwarders. I am currently seeing between 500 and 1000 blocked events on each heavy forwarder daily when running:
index=_internal host=HF blocked=true
The total ratio of blocked events seems to be about 10%, and they mostly appear in the aggqueue.
My main question is whether this is reason for concern, and what the impact on my current Splunk environment would be. Also, why would all this blocking be mainly in one queue?
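To quantify the blocking per queue from the metrics data (a sketch; group=queue samples in metrics.log carry a name field and blocked=true when a queue is full):

```
index=_internal host=HF source=*metrics.log* group=queue
| stats count as samples count(eval(blocked="true")) as blocked_samples by name
| eval blocked_pct = round(blocked_samples / samples * 100, 2)
| sort - blocked_pct
```

If aggqueue dominates, that points at the merging/line-breaking stage rather than parsing or indexing downstream.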
Thank you,
Oliver
I have a JSON file.
Once I upload the file on the search head using the below stanza in props.conf it's indexed properly.
Splunk 7.3.4
[json_test]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
SEDCMD-cut_footer = s/\]\,\n\s*\"total\":.*$/g
SEDCMD-cut_header = s/^\{\n\s*\"matches\":\s\[/g
category = Structured
disabled = false
HEADER_FIELD_LINE_NUMBER = 3
SHOULD_LINEMERGE = 0
TRUNCATE = 0
INDEXED_EXTRACTIONS = json
KV_MODE = none
When I send the data via the UF, it does not break into events.
Universal Forwarder
props.conf
[json_test]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
inputs.conf
[monitor:///tmp/*.json]
disabled = 0
sourcetype = json_test
index = test_hr
crcSalt = REINDEXMEPLEASE
initCrcLength = 780
Indexer
props.conf
[json_test]
DATETIME_CONFIG = CURRENT
SEDCMD-cut_footer = s/\]\,\n\s*\"total\":.*$/g
SEDCMD-cut_header = s/^\{\n\s*\"matches\":\s\[/g
category = Structured
disabled = false
HEADER_FIELD_LINE_NUMBER = 3
SHOULD_LINEMERGE = 0
TRUNCATE = 0
Search Head
props.conf
[json_test]
KV_MODE = none
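One thing worth trying, under the assumption that with INDEXED_EXTRACTIONS the UF performs the structured parsing itself, so the indexer-side line-breaking settings never apply to this data: put the full structured-parsing stanza on the UF rather than splitting it across tiers. A sketch (the SEDCMD lines are left on the indexer, since the UF does not apply SEDCMD):

```
# props.conf on the Universal Forwarder
[json_test]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
HEADER_FIELD_LINE_NUMBER = 3
SHOULD_LINEMERGE = 0
TRUNCATE = 0
```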
Hi,
I have installed Splunk forwarder 8.0.2 to send data to Splunk Enterprise 7.3.0.
Throughout the day, multiple .err files are created on my client server. I want to monitor all of them. Sometimes the .err files are empty, sometimes they are small, and sometimes they are very big.
In my Splunk Enterprise web interface I do not see all of my .err files. I can only see .err files from August 3 to October 23. Where are the other files for today and other days?
I had some CRC length errors. Now I use the parameter crcSalt = <SOURCE>.
Here is my inputs.conf :
[monitor:///prddata/JobOutput/*/*.err]
index=fileauxnfmerrorlogs
sourcetype=fcravd10logfile
crcSalt = <SOURCE>
Here is my outputs.conf:
[tcpout:splunkdev]
server=sapoxt3.os.amadeus.net:9997
How can I troubleshoot this?
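One way to see which files actually made it in, and when (a sketch against the index named in the inputs.conf above):

```
index=fileauxnfmerrorlogs
| stats count earliest(_time) as first_seen latest(_time) as last_seen by source
| convert ctime(first_seen) ctime(last_seen)
```

On the forwarder host itself, `$SPLUNK_HOME/bin/splunk list inputstatus` shows how far the TailingProcessor has read each monitored file, which helps separate "never picked up" from "read but not indexed".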
Regards,
We've recently migrated from 12 indexers per site on a slower storage array to 24 indexers per site on much faster storage arrays. Since the move we have seen IO throughput on indexer luns peak at around 6 - 8 GB/s, per site - for anywhere between 5 and 30 minutes. When that happens we start getting throttled by the storage array and latency goes up (as expected). We'd like to dig into the queries that are running at this time and see if we can do something about them (delete them, rewrite them, add datamodels, etc).
It's pretty easy to query the _internal index for sourcetype=scheduler and look at runtimes, etc. However, that doesn't give us an indication of how many buckets or slices were required to be examined by the indexers in order to satisfy the search.
Does anyone have recommendations, example searches, etc, that we can use to dig into this?
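A sketch against the audit index, which records per-search resource figures that get closer to "work done on the indexers" than scheduler runtimes do (scan_count is the number of events the indexers had to scan; savedsearch_name is empty for ad hoc searches):

```
index=_audit action=search info=completed
| stats sum(scan_count)     as events_scanned
        sum(total_run_time) as total_runtime
        count               as searches
        by user savedsearch_name
| sort - events_scanned
```

Narrowing earliest/latest to one of the 5-30 minute throttling windows should surface the searches driving the IO spike.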
How do I write a rex query for nested JSON (a table inside a table) for the case below?
"studentInfo": {
"name": "Apple",
"id": "57",
"batch": "2006",
"subjects": {
"subject1": "English"
}
}
index=schoolIndex sourcetype=dev studentInfo | rex field=_raw "\"contentversions\":(?<message>.*)}+" | spath input=message | table name id subjects
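For nested JSON it is usually simpler to skip rex and let spath walk the structure directly; a sketch, with path names taken from the event above:

```
index=schoolIndex sourcetype=dev studentInfo
| spath
| table studentInfo.name studentInfo.id studentInfo.batch studentInfo.subjects.subject1
```

spath flattens the nesting into dotted field names, so the inner subjects object becomes studentInfo.subjects.subject1 without any regex.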
Hello, everyone!
I need to understand the power usage (consumption) of my PCs, servers, and network devices, and I want to monitor it with Splunk.
Where can I find data for this use case?
Thx!
Hi! I'm trying to create a search that would return unique values in a record, but in one list.
The search "basesearch | table scn*" comes up with a table with values across scn01 to scn20. What I want is a unique list of the values of all those fields, combined into one column. I don't need to preserve the original field names.
How might I do that?
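A sketch using foreach to merge every scn* field into one multivalue column and deduplicate it:

```
basesearch
| eval combined=null()
| foreach scn* [ eval combined=mvappend(combined, '<<FIELD>>') ]
| eval combined=mvdedup(combined)
| stats values(combined) as unique_values
```

mvappend skips null arguments, so fields that are empty on a given row simply contribute nothing; the final stats collapses everything into one list across all rows.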
Thanks!
Stephen
Hello,
I have this query
| loadjob savedsearch="myquery"
| where (strftime(_time, "%Y-%m-%d") >= "2020-02-26") AND (strftime(_time, "%Y-%m-%d") <= "2020-03-03") and STEP=="Click"
| bucket _time span=1d
|stats min(_time) as _time by MESSAGE
|where MESSAGE = "337668c2-162c-4f4f-bda9-92f7816f2752" OR MESSAGE = "46095117-4dcb-4ebc-9906-8c23f1a1a26b" OR MESSAGE = "60eb62a4-c54a-4fc0-9aaa-17726ff62929" OR MESSAGE = "8b5e055c-17ab-4135-8b90-1fbc65032792"
Now I want to count the MESSAGE values by _time.
This is what I have as a result:
And this is what I want:
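If the goal is one count of distinct messages per day, replacing the min() with a distinct count should get there; a sketch built on the query above, keeping the same filters:

```
| loadjob savedsearch="myquery"
| where (strftime(_time, "%Y-%m-%d") >= "2020-02-26") AND (strftime(_time, "%Y-%m-%d") <= "2020-03-03") AND STEP=="Click"
| bucket _time span=1d
| stats dc(MESSAGE) as message_count by _time
```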
Thanks for help
Hi,
I have the Smart PDF Exporter app, and the PDF download works well. But when I download a panel as a .png file, it comes up blank.
Any help would be appreciated.
Thanks
I am trying to get the max count for yesterday, but along with this I need the report to display yesterday's date.
Kindly help me to get the date in the results along with the existing results.
Query: sourcetype="x" name = "any" | bin _time span=1s | stats count by logtime | stats max(count)
Output for the above query is :
max(count)
34
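One way to attach the date is to compute it with relative_time (a sketch; it also pins the time range to yesterday, and bins on _time so the per-second grouping matches the bin):

```
sourcetype="x" name="any" earliest=-1d@d latest=@d
| bin _time span=1s
| stats count by _time
| stats max(count) as max_count
| eval report_date=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
```

The result is one row with max_count and a report_date column showing yesterday's date.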
Thanks In Advance