Hi, when I run the command below, it works fine:

index=toto event_id=4688
| eval file_name=if(event_id==4688, replace(NewProcessName, "^*\\\\([^\\\\]+)$","\\1"),null)

Now I need to combine this search with a subsearch:

index=toto event_id=4688
| eval file_name=if(event_id==4688, replace(NewProcessName, "^*\\\\([^\\\\]+)$","\\1"),null)
    [| inputlookup test where software=pm
     | table pm
     | rename pm as file_name
     | format]
| stats values(file_name) as file_name .....

But I get the message "Error in 'EvalCommand': The expression is malformed". What is wrong, please?
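One possible cause, offered as an assumption rather than a confirmed diagnosis: the subsearch sits directly after the eval command with no command of its own, so the string produced by | format gets appended to the eval expression, and eval also expects null() as a function rather than a bare null. A minimal sketch that applies the lookup values as a filter through an explicit | search instead:

index=toto event_id=4688
| eval file_name=if(event_id==4688, replace(NewProcessName, "^*\\\\([^\\\\]+)$","\\1"), null())
| search
    [| inputlookup test where software=pm
     | table pm
     | rename pm as file_name
     | format]
| stats values(file_name) as file_name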
Hello Team,
I need to use the predict command, but currently I have only 110 data events, so to get more data points I am trying to add mock data in which only the time field differs. Also, my dataset has only a MonthYear field, and data has been collected since March of this year. I read about the repeat function and dataset literals; can we use them in this scenario? A sample of the data is below.
Quarter | Subscription ID | Subscription name | Azure service | Azure region | Usage | MonthYear
Qtr 1 | 020b3b0c-5b0a-41a1-8cd7-90cbd63e06 | SUB-PRD-EDL | Azure Data Factory | West | 9,10E-12 | March 2023
Qtr 1 | 020b3b0c-5b0a-41a1-8cd7-90cbd63e06 | SUB-PRD-EDL | Azure Data Factory | West | 0 | March 2023
Qtr 1 | 020b3b0c-5b0a-41a1-8cd7-90cbd63e06 | SUB-PRD-EDL | Azure Data Factory | West | 4,40303E-09 | March 2023
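For generating mock events, a minimal sketch with makeresults rather than dataset literals; it assumes one synthetic event per day, and the Usage values produced here are random placeholders rather than real figures:

| makeresults count=110
| streamstats count AS row
| eval _time=relative_time(now(), "-" . row . "d")
| eval Usage=random() % 100
| eval MonthYear=strftime(_time, "%B %Y")
| timechart span=1d sum(Usage) AS Usage
| predict Usage

The same pattern can be appended to the real search with append so that predict sees a longer, evenly spaced time series.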
Hello, can anyone help me extract the file names starting with OU_..... from the raw data below?
12:04:19.85 14/09/2023 directory="E:\data\Test" ECHO is off.
Volume in drive E is Data Volume Serial Number is 7808-CA1B
Directory of E:\data\Test 13/09/2023 13:22
<DIR> XXX\xxxx . 13/09/2023 13:22 <DIR> xxx\xxx .. 12/09/2023 09:31 95 xxx\xxx dir_details.bat 13/09/2023 13:41 171 xxx\xxx dir_details_copy.bat 07/09/2023 13:26 0 xxx\xxx edsadsad.txt 07/09/2023 13:26 22 xxx\xxx OU_kljdajdklsajkdl.zip 07/09/2023 13:26 22 xxx\xxx OU_kljdajdklsajkewew.zip 07/09/2023 13:26 22 xxx\xxx OU_kljdajdklsajkewewdsads.zip 6 File(s) 332 bytes 2 Dir(s) 20718067712 bytes free
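A rough sketch of one way to pull those names out with rex; max_match=0 captures every match in the event, and the field name zip_file plus the index/sourcetype values are placeholders I introduced, not names from the original data:

index=your_index sourcetype=your_sourcetype "OU_"
| rex max_match=0 "(?<zip_file>OU_\S+\.zip)"
| mvexpand zip_file
| table _time zip_file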
Hello guys, I want to point my universal forwarder at a new deployment server. How do I change the current deployment server? I am currently pushing apps to the universal forwarder, but I can't change deployment_server.
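A minimal sketch of the change, assuming the new deployment server is new-ds.example.com on management port 8089 (both placeholder values). Either edit deploymentclient.conf on the forwarder:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089

or set it from the CLI and restart:

$SPLUNK_HOME/bin/splunk set deploy-poll new-ds.example.com:8089
$SPLUNK_HOME/bin/splunk restart

If the current deploymentclient.conf is delivered by a deployed app, the setting may need to be changed there instead; that depends on how the forwarder was originally configured.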
Hi all, we have a new Splunk Enterprise build set up in a temporary location, and we are planning a data center migration from the temporary location to the permanent location. We want to know whether Splunk Enterprise installed on the component servers is impacted in any way by the change of IP addresses that comes with the DC migration.
Hello, I have installed Sysmon and I am trying to send its logs with a UniversalForwarder on that machine to my Splunk indexer and search head. I have tried adding the following to inputs.conf:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0

[WinEventLog://"Applications and Services Logs/Microsoft/Windows/Sysmon/Operational"]
disabled = 0

[WinEventLog://Applications and Services Logs/Microsoft/Windows/Sysmon/Operational]
disabled = 0

but none of those versions worked. I have also restarted the UniversalForwarder, and the indexer / search head has the Sysmon app installed. What am I doing wrong?! PS: Sysmon is running and I can see the logged data in the Event Viewer of that machine...
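For reference, a minimal inputs.conf sketch using the channel name as Windows exposes it (the first form above); the index name sysmon is an assumption and must already exist on the indexer:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
index = sysmon
renderXml = false

The "Applications and Services Logs/..." path shown in Event Viewer is only the display tree, not the channel name, which is why the other two stanza variants usually do not match anything.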
Hi Team, I have the query below:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| eval EBNCStatus="ebnc event balanced successfully"
| dedup EBNCStatus
| table EBNCStatus True

I am deduping on EBNCStatus, so when I select Yesterday in the date filter it shows one count, but when I select Last 7 days it still shows only one count. I want it to show 7 counts when I select 7 days. Can someone help me with this?
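A rough sketch of one way to get one row per day: bin the events by day before the dedup so each day keeps its own row (since the base search already filters on the success message, True is set directly):

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| bin _time span=1d
| eval EBNCStatus="ebnc event balanced successfully"
| eval True="✔"
| dedup _time EBNCStatus
| table _time EBNCStatus True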
Hello, I have this simple input that stopped working after renaming the sourcetype (the data flows from a Linux server to the indexers):

[monitor:///opt/splunk_connect_for_kafka/kafka_2.13-3.5.1/logs/connect.log]
disabled = false
index = _internal
sourcetype = kafka_connect_log

I restarted the universal forwarder many times, but it is not helping. Any other troubleshooting steps?
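A few generic checks on the forwarder, offered as a sketch (paths assume a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug    # confirm which inputs.conf the stanza is actually read from
$SPLUNK_HOME/bin/splunk list inputstatus                     # check whether the file is being tailed and how far it has been read
grep -i connect.log $SPLUNK_HOME/var/log/splunk/splunkd.log  # look for TailReader / WatchedFile messages about the file

Also worth noting, as general behaviour rather than a diagnosis of this case: renaming the sourcetype does not re-index data the forwarder has already read, so only new lines written to connect.log after the change would appear under the new sourcetype.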
Hi guys, I'm trying to figure out what the prerequisites are to validate Splunk, such as the running service name, the application name in Control Panel, and the registry path.
Hi, how can we normalize MAC addresses (such as XX:XX:XX:XX:XX:XX or XX-XX-XX-XX-XX-XX) in our environment before implementing the asset and identity framework in Splunk ES, given that we are collecting data from workspace?
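A minimal eval sketch for normalizing to lower-case, colon-separated form; the field name mac is a placeholder for whatever field actually carries the address in your data:

| eval mac_hex=lower(replace(mac, "[^0-9a-fA-F]", ""))
| eval mac=replace(mac_hex, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
| fields - mac_hex

The same expression can sit in the saved search or lookup generator that feeds the asset and identity lists, so every source ends up in one consistent format.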
Hi there, I am trying to make a statistics graph in my dashboard using the search below:

| mstats rate(vault.runtime.total_gc_pause_ns.value) as gc_pause WHERE `vault_telemetry` AND cluster=* AND (host=*) BY host span=5m
| timechart max(gc_pause) AS iowait bins=1000 BY host
| eval warning=3.3e7, critical=8.3e7

Note that this search comes from the pre-defined dashboard template, but it does not work as-is in my environment. In my Splunk, when I run an mpreview against my `vault_telemetry` index I get results like:

metric_name:vault.hostname1.runtime.total_gc_pause_ns
metric_name:vault.hostname2.runtime.total_gc_pause_ns
metric_name:vault.hostname3.runtime.total_gc_pause_ns
metric_name:vault.hostname4.runtime.total_gc_pause_ns

If I modify the pre-defined search from the template as below, I can get results, but only for one hostname at a time:

| mstats rate(vault.hostname1.runtime.total_gc_pause_ns) as gc_pause WHERE `vault_telemetry` span=5m
| timechart max(gc_pause) AS iowait bins=1000
| eval warning=3.3e7, critical=8.3e7

I would like to have all the hostnames shown on a single panel. Can someone please assist and help me with the correct search I need to use?
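One possible approach, as a sketch only: let mstats match the hostname part of the metric name with a wildcard, group by metric_name, and then derive a host field from it. The wildcard/metric_name syntax may need adjusting for your Splunk version:

| mstats rate(_value) as gc_pause WHERE `vault_telemetry` metric_name="vault.*.runtime.total_gc_pause_ns" span=5m BY metric_name
| eval host=replace(metric_name, "^vault\.(.+)\.runtime\.total_gc_pause_ns$", "\1")
| timechart max(gc_pause) AS iowait bins=1000 BY host
| eval warning=3.3e7, critical=8.3e7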
Hello Splunk community, I am new as a Splunk administrator here in the company, and a few days ago I received the requirement to upgrade the Splunk version. We have Splunk 8.2.6 and the minimum version required is 8.2.12. I'm not sure how big the risk is in the upgrade process, as we need to be sure the information in the indexers is going to be safe and Splunk must remain operational. I have read some of the documentation on upgrading to version 9.0.6, but as I said, I am not sure which option carries the minimum risk. Do you have any advice? Thank you!
Hello Splunkers,
Can someone help me with a query to detect multiple HTTP errors from a single IP, basically when the status code is in the 400s/500s?
Thank you,
regards,
Moh
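A rough starting point, as a sketch; the index and sourcetype, the field names status and src_ip, and the threshold of 10 are all assumptions that need to match the actual web data:

index=web sourcetype=access_combined status>=400 status<600
| stats count AS error_count values(status) AS status_codes BY src_ip
| where error_count > 10
| sort - error_count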
I am trying to parse some data for API latency. I have a value for "elapsedTime" which reports that latency. However, if a request takes longer than 999 ms, it switches to reporting in seconds, so the query below could return 999ms or 1.001s. What eval statement do I need here to parse the value of elapsedTime so that, if it contains "s" and not "ms", it is multiplied by 1000 to give a value in ms?

... | NEED SOME EVAL HERE I GUESS
| stats min(elapsedTime) as Fastest max(elapsedTime) as Slowest avg(elapsedTime) as "Average"
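A sketch of one way to do it, assuming elapsedTime is a string such as "999ms" or "1.001s"; the field name elapsed_ms is a placeholder I introduced:

| eval elapsed_ms=case(
    match(elapsedTime, "ms$"), tonumber(replace(elapsedTime, "ms$", "")),
    match(elapsedTime, "s$"), tonumber(replace(elapsedTime, "s$", "")) * 1000
  )
| stats min(elapsed_ms) as Fastest max(elapsed_ms) as Slowest avg(elapsed_ms) as "Average"

The "ms$" case is checked first so that millisecond values are not caught by the plain "s$" branch.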
I am looking to audit any user that uploads to splunk through the User interface or REST API
After doing some investigation I have found that /services/apps/local is the REST API endpoint that can be used to post an application. I was wondering whether Splunk internally posts to that API when you utilise the GUI, so that by auditing that log you can cover both use cases.
I have crafted the below search to isolate these events and confirmation that this works would be awesome!
index=_internal sourcetype=splunkd_access /services/apps/local method=POST
Appreciate all assistance.
This is a bit of a long shot, but I was curious to get the community's input. Today, I realized that both Slack and PasteBin use "codemirror" to handle their web code editor / syntax highlighting. With PasteBin, I had to examine the page source to confirm. With Slack, you can confirm it here: https://slack.com/libs/webapp I figured I would submit a feature request to codemirror to see if they could add a "language mode" for Splunk. However, my issue was immediately closed with the response that codemirror does not implement new language modes, and it would be better implemented via a separate package. So I guess someone will have to create and maintain a language mode for codemirror for others to use if they want SPL support. Unfortunately, I do not have the experience to do this. But looking around, it appears GraphQL built their own codemirror language code package...so I was thinking...even if it's a long shot, maybe I can send the idea to Splunk and see what happens. Where would be the appropriate place to send this suggestion in to Splunk to see if that's something they'd be interested in implementing? I tried checking the ideas submission, but there are no categories where this idea would fit. I think it would be awesome if one day we could have Splunk syntax highlighting support in Slack (and also PasteBin, but there's a lot less people using that lol).
Hello, I have the three search queries below, and I want to combine the three metric-name sums into one total count. Can someone assist with how I can write my query?

First query:
| mstats sum(vault.token.creation.nonprod) as count where index=vault_metrics span=1h
| timechart sum(count) as count span=1h
| fillnull value=0
| eventstats perc90(count) perc50(count)

Second query:
| mstats sum(vault.token.creation.dev) as count where index=vault_metrics span=1h
| timechart sum(count) as count span=1h
| fillnull value=0
| eventstats perc90(count) perc50(count)

Third query:
| mstats sum(vault.token.creation.nonprod_preprod) as count where index=vault_metrics span=1h
| timechart sum(count) as count span=1h
| fillnull value=0
| eventstats perc90(count) perc50(count)
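A sketch of one way to combine them in a single mstats call; it keeps the original metric names and index, sums the three series, and then reuses the timechart/eventstats tail from the original queries:

| mstats sum(vault.token.creation.nonprod) as nonprod sum(vault.token.creation.dev) as dev sum(vault.token.creation.nonprod_preprod) as nonprod_preprod where index=vault_metrics span=1h
| fillnull value=0 nonprod dev nonprod_preprod
| eval count=nonprod + dev + nonprod_preprod
| timechart sum(count) as count span=1h
| fillnull value=0
| eventstats perc90(count) perc50(count)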