If that is your search, you should be getting an error! Is the search relevant to the count you want, i.e. should the count be based on the results of a working search, or come from the index, or from part of the search?
Just to be clear, you are thinking of the previous 2-week average by hour of day, not the previous 2-weekday average. Correct?
index="xyz" sourcetype="abc" (app_name="123" OR app_name="456") earliest=-15d@d latest=now
| rex field=msg "\"[^\"]*\"\s(?<status>\d+)"
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
```| where current_day == log_day```
| eval hour=strftime(_time, "%H")
| eval current_hour = strftime(now(), "%H")
| where hour <= current_hour
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE
Note you cannot have | where current_day == log_day and still get an average across multiple days.
This is my search: | mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host | timechart avg("value1") span=10s useother=false BY host WHERE max in top5 which is working fine. I just want to create a new alert that triggers when the host count is less than 3. How can I do that?
It is not clear what you have actually tried and what is "not working". Please provide your full search, anonymised as necessary, and show how it is not working.
What do you mean by "logging to _internal"? Normally in the _internal index you'll find... well, internal events coming from the forwarder itself - metrics, forwarder errors and such. And you should have them from all 5 forwarders. But if you see your Windows eventlog contents in _internal - that's a big misconfiguration. Either someone put index=_internal into the inputs.conf stanzas defining your Windows eventlog inputs (but why would someone do that???) or you have some strange redirecting mechanics defined in your environment by which the events end up in that index. But that's highly unlikely. There is also a third option - your _internal index is set as the lastChanceIndex (which is a wrong setting - this one should point to a normal, non-_internal index) and your inputs are misconfigured and try to write to a non-existent index.
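A quick way to check (a rough sketch, not specific to any environment) is to look at what is actually landing in _internal and where the eventlog data really ends up; the WinEventLog* sourcetype pattern below is an assumption about how the inputs are named:
| tstats count where index=_internal by host, sourcetype
| tstats count where (index=* OR index=_internal) sourcetype=WinEventLog* by index, host
The first search should show only splunkd/metrics-type sourcetypes from all five forwarders; the second shows which index the eventlog data is actually being written to.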
I had a situation where the customer wanted to ingest Windows eventlogs forwarded by some strange third-party "eventlog-to-syslog" solution. In that case it was not JSON but some key=value pairs, but the idea is the same - there is so much work needed to properly process it afterwards, and it would require a lot of development to prepare everything to "mirror" the TA_windows settings. So we told the customer that if there is no other way, of course we can ingest the data so it will be searchable "somehow" in case there is a need to find something in the raw events, but we will not even attempt to make it "compatible" with normal Windows logs. It makes no sense.
Hello friends! I get JSON like this: {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338} and so on, e.g. {"key":"29.09.2023","value_sum":51150570.59,"value_cnt":2736}, and the raw events look like this:
10/4/23 1:23:03.000 PM {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh sourcetype = damu_pays_7d
And I want to get a table like this:
days sum cnt
27.09.2023 35476232.82 2338
29.09.2023 51150570.59 2736
So I have to get the latest events and put them into a table. Please help.
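A minimal sketch of such a search, assuming the sourcetype shown above and that the JSON fields key, value_sum and value_cnt are extracted automatically (if they are not, add | spath right after the base search):
sourcetype=damu_pays_7d
| stats latest(value_sum) as sum latest(value_cnt) as cnt by key
| rename key as days
| table days sum cnt
latest() picks the most recent value per key, so only the newest event for each day ends up in the table.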
No. You can't do it _during installation_ (and how would you want to do that in the simplest case when you just unpack an archive?). You can, however, do it as a next step right after the installation - you just have to create a user-seed.conf file. See https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/User-seedconf
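For reference, a minimal user-seed.conf (created in $SPLUNK_HOME/etc/system/local before the first start; the credentials below are only placeholders) looks like this:
[user_info]
USERNAME = admin
PASSWORD = Ch@ng3d!
The admin account is created from this file when the forwarder starts for the first time.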
Right, so basically I was mistaken in remembering you could opt to ingest Windows eventlog as JSON using the standard Splunk setup. I would really prefer not to have it delivered as JSON, though for this particular case there is no option. It's JSON or nothing; it is already converted on the "sender side". While not what I hoped for, of course it does answer my question and there is no "shortcut". We'll just have to solve it "the hard way". Thanks
Hi all, I just wanted to ask if there is the possibility to pass username and password when starting the Splunk forwarder (9.1.1) on a Linux system for the first time. With the Windows Universal Forwarder this is possible during installation via:
msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet
Is there a similar command for the Linux forwarder? Best regards
One panel can only use one type of drilldown. There can be several ways to hack something out. The most effective is to build a custom Web application that redirects the browser to whichever end URL it outputs based on the variables you send it. Alternatively, you may settle for a lesser workaround: drill down to a custom search (or a dashboard, or even just change another panel on the same dashboard) to display the URL you intend to send the browser to; then ask your user to copy and paste.
I have configured 5 domain controllers to send logs to Splunk by installing the UF. I have DC2 and DC5 reporting to WinEventLog as configured, but I am missing the other 3 DCs. All are logging to _internal. What should I do to correct the logging?
Hi, I have this command: | mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host | timechart avg("value1") span=10s useother=false BY host WHERE max in top5 and I would like to count the hosts and trigger when I have less than 3 hosts. I tried something like that: ```|stats dc(host) as c_host | where c_host > 3``` but it's not working, as usual. Any idea? thanks!
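A minimal sketch of a separate alert search, assuming the check can run on its own rather than being appended to the dashboard search (after timechart ... BY host the host field no longer exists, so dc(host) has nothing to count there), and that the intended condition is fewer than 3 hosts:
| mstats avg("value1") WHERE "index"="my_index" span=10s BY host
| stats dc(host) as c_host
| where c_host < 3
The alert can then be set to trigger when the number of results is greater than 0.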
You got it backwards. strptime can turn your time value into the real epoch time for _time so you can use timechart. Why name the variable time instead of _time? The timechart command only works with the field _time.
| inputlookup 7days_Trail.csv
| eval _time=strptime(_time, "%FT%H:%M:%S.%Q:%z")
| timechart avg(*) as *
You can replace avg with any stats function that suits your need.
Hi @_pravin, if your raw data has a shorter retention than the data model, you can still search on the data model over the longer period, but a drilldown to the raw data is possible only for the shorter period covered by the raw retention. Usually it's the contrary: the data model is searched over a period shorter than or equal to the raw data. Ciao. Giuseppe
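For example, a data-model-only search looks roughly like this (a sketch; the data model name Web is only a placeholder for whichever accelerated data model is in use):
| tstats summariesonly=true count from datamodel=Web by _time span=1d
With summariesonly=true the search reads only the accelerated summaries and never touches the raw events.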