All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Did you manage to solve this? I'm seeing the same error on 9.1.1 while trying to investigate a license violation using a Developer License.
It is not clear what you have actually tried and what is "not working". Please provide your full search, anonymised as necessary, and show how it is not working.
Yeah, that's basically what I'm worried about. Thank you for the feedback.
What do you mean by "logging to _internal"? Normally in the _internal index you'll find... well, internal events coming from the forwarder itself - metrics, forwarder errors and such. And you should have them from all 5 forwarders. But if you see your windows eventlog contents in _internal - that's a big misconfiguration. Either someone put index=_internal into inputs.conf defining your windows eventlog inputs (but why would someone do that???) or you have some strange redirecting mechanics defined in your environment by which the events end up in that index. But it's highly unlikely. There is also a third option - your _internal index is set as the lastChanceIndex (which is a wrong setting - this one should point to a normal, non-_internal index) and your inputs are misconfigured and try to write to a non-existent index.
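For reference, a minimal sketch of the third option done right - lastChanceIndex pointing at a normal event index (the index name main here is just a placeholder):

```
# indexes.conf on the indexer (global setting, hypothetical index name)
[default]
lastChanceIndex = main
```

With that in place, events addressed to a non-existent index land in main instead of being dropped (and instead of someone "fixing" it by pointing at _internal).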
Let's say I have a table of two fields, and some of the cells are empty. How do I find the number of empty cells using "addcoltotals"?
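One possible sketch (field names fieldA and fieldB are hypothetical stand-ins for the two columns): count the empty cells per row into a new field, then let addcoltotals sum that field.

```
| eval empty_count = if(isnull(fieldA) OR fieldA="", 1, 0) + if(isnull(fieldB) OR fieldB="", 1, 0)
| addcoltotals labelfield=fieldA label="Total" empty_count
```

The last row's empty_count would then hold the total number of empty cells across both columns.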
I had a situation where the customer wanted to ingest windows eventlogs forwarded by some strange third-party "eventlog-to-syslog" solution. In that case it was not JSON but some key=value pairs, but the idea is the same - there is so much work needed to properly process it afterwards, and it would require a lot of development to prepare everything to "mirror" the TA_windows settings. So we told the customer that if there is no other way, of course we can ingest the data so it will be searchable "somehow" in case there is a need for finding something in the raw events, but we will not even attempt to make it "compatible" with normal windows logs. It makes no sense.
Hello friends! I get JSON like this:

{"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}

and so on:

{ [-]
   key: 29.09.2023
   value_cnt: 2736
   value_sum: 51150570.59
}

and the raw event looks like this:

10/4/23 1:23:03.000 PM   {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz
source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh
sourcetype = damu_pays_7d

And I want to get a table like this:

days        sum          cnt
27.09.2023  35476232.82  2338
29.09.2023  51150570.59  2736

So I have to get the latest events and put them in a table. Please help.
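A rough SPL sketch of one way to build that table, keeping the latest event per key (field names taken from the sample event; the sourcetype is assumed as shown, untested against your data):

```
sourcetype=damu_pays_7d
| spath
| stats latest(value_sum) as sum latest(value_cnt) as cnt by key
| rename key as days
| table days sum cnt
```

If the JSON fields are already extracted at search time, the spath step may be unnecessary.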
No. You can't do it _during installation_ (and how would you want to do that in the simplest case when you just unpack an archive?). You can, however, do it as a next step right after the installation - you just have to create a user-seed.conf file. See https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/User-seedconf
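A minimal user-seed.conf sketch (the credentials are placeholders; per the linked docs the file goes under $SPLUNK_HOME/etc/system/local/ before the first start, and is consumed on first startup):

```
[user_info]
USERNAME = SplunkAdmin
PASSWORD = Ch@ng3d!
```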
Right, so basically I was mistaken in remembering you could opt to ingest Windows eventlog as JSON using the standard Splunk setup. I would really prefer not to have it delivered as JSON, though for this particular case there is no option. It's JSON or nothing; it is already converted on the "sender side". While not what I hoped for, of course it does answer my question: there is no "shortcut". We'll just have to solve it "the hard way". Thanks
Hi all, I just wanted to ask if there is the possibility to pass username and password when starting the Splunk forwarder (9.1.1) on a Linux system for the first time. With the Windows Universal Forwarder this is possible during installation via:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

Is there a similar command for the Linux forwarder?

Best regards
One panel can only use one type of drilldown. There can be several ways to hack something out. The most effective is to build a custom Web application that redirects the browser to whichever end URL it outputs based on the variables you send it. Alternatively, you may settle for a lesser workaround: drill down to a custom search (or a dashboard, or even just change another panel on the same dashboard) to display the URL you intend to send the browser to; then ask your user to copy and paste.
I have configured 5 domain controllers to send logs to Splunk by installing the UF. I have DC2 and DC5 reporting to WinEventLog as configured, but I am missing the other 3 DCs. All are logging to _internal. What should I do to correct the logging?
Hi, I have this command:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| timechart avg("value1") span=10s useother=false BY host WHERE max in top5

and I would like to count the hosts and trigger when I have fewer than 3 hosts. I tried something like this:

|stats dc(host) as c_host | where c_host > 3

but it's not working as usual. Any idea? Thanks!
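One possible reason the distinct count fails is that after timechart BY host the results have one column per host rather than a host field, so dc(host) has nothing to count. For an alert, a sketch that aggregates directly (untested against your metrics, and using < 3 to match the stated trigger condition):

```
| mstats avg(value1) as avg_value WHERE index="my_index" BY host
| stats dc(host) as c_host
| where c_host < 3
```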
You got it backwards.  strptime can get you _time into the real time value so you can use timechart.  Why name the variable time instead of _time?  The timechart command only works with the field _time.

| inputlookup 7days_Trail.csv
| eval _time=strptime(_time, "%FT%H:%M:%S.%Q:%z")
| timechart avg(*) as *

You can replace avg with any stats function that suits your need.
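For anyone unsure what that strptime call produces: it turns the text timestamp into epoch seconds. Splunk's format codes map closely to the C/Python ones - %F is %Y-%m-%d, and Splunk's %Q (subseconds) corresponds roughly to Python's %f. A hypothetical Python check of the same idea, with a made-up timestamp in a simplified ISO shape:

```python
from datetime import datetime

# Hypothetical timestamp, similar in shape to the lookup's _time column
ts = "2023-10-04T13:23:03.000+06:00"

# %F in Splunk = %Y-%m-%d; Splunk's %Q (milliseconds) ~ Python's %f (microseconds)
dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

# Epoch seconds - the numeric value Splunk stores in _time after strptime
epoch = dt.timestamp()
print(int(epoch))
```

If the parse succeeds you get a timezone-aware datetime; if the format string doesn't match the text, strptime raises an error - which in Splunk shows up as a null _time instead.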
Hi @Choi_Hyun,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hi @_pravin
if you have a shorter retention on your raw data, you can still search the data in the data model, but if you want a drilldown on raw data, it's possible only for that shorter period. Usually it's the contrary: you search the data model over a shorter or equal period than the raw data.
Ciao.
Giuseppe
Thanks @yuanliu. I had missed "%FT%" & ":z" when I tried.
@ITWhisperer Here are a few things I have tried till now:
1.
| inputlookup 7days_Trail.csv
| eval time=strptime(_time, "%FT%H:%M:%S.%Q:%z")
| table time 2xx 4xx 5xx
After using the above query, the data and the graph looked as in the attached screenshots - time was not getting updated/populated.
2. Since _time was not getting populated even after formatting, I used the table command directly. Looks like it's working. Can you please confirm if I can use this approach?
Hi @sarit_s,
as I said, I don't know if the solution is acceptable for you; this is a workaround because it isn't possible to group by more than one field.
Ciao.
Giuseppe
Hi, I'm trying to plot a graph for the previous 2 weekdays' average. Below is the query used:

index="xyz" sourcetype="abc" app_name="123" or "456" earliest=-15d@d latest=now
| rex field=msg "\"[^\"]*\"\s(?<status>\d+)"
| eval HTTP_STATUS_CODE=case(like(status, "2__"),"2xx")
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| where current_day == log_day
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE

This plots a graph for the complete 24 hrs. I wanted to know if I can limit the graph to the current timestamp. Say the system time is now 11AM; I want the graph to be plotted only up to 11AM and not the entire 24 hrs. Can it be done? Please advise.
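One possible approach (a sketch, untested): since the query already has an hour field, filter out future hours before the final chart by comparing against the current hour.

```
| stats count by hour day HTTP_STATUS_CODE
| where tonumber(hour) <= tonumber(strftime(now(), "%H"))
| chart avg(count) as average by hour HTTP_STATUS_CODE
```

This replaces the last two lines of the original search; the tonumber() calls avoid a string comparison on the zero-padded hour.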
Very good, this is what I was looking for. Thank you. Do you know how I can now color each cell depending on the status code? Usually I use the following configuration in the dashboard:

<format type="color" field="status">
  <colorPalette type="expression">case(value like "5%","#D6563C",value like "4%","#F2B827",value like "3%","#A2CC3E",value like "2%","#65A637",true(),null)</colorPalette>
</format>

but it is not working now (I suppose because of the transpose command).
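One possible explanation: after transpose there is no longer a field called status - by default the transposed columns are named "row 1", "row 2", and so on, so the format element no longer matches anything. A sketch that targets a transposed column instead (assuming the default column names; repeat per column as needed):

```
<format type="color" field="row 1">
  <colorPalette type="expression">case(value like "5%","#D6563C",value like "4%","#F2B827",value like "3%","#A2CC3E",value like "2%","#65A637",true(),null)</colorPalette>
</format>
```

Using header_field in the transpose command to give the columns meaningful names may make the field attribute easier to maintain.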