All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


SOURCE CODE | eventstats count(eval(errorCount=0)) AS passed, count(shortVIN) AS total | timechart span=1w@w0 eval((passed/total)*100) AS percentPassed

I get a message saying "expression has no fields" on the timechart. I am unsure which expression to use to display week 1's percentage, then week 2's percentage, and so on. Any help is appreciated.
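One possible rewrite, untested and using only the field names from the question: timechart's eval() form needs aggregation functions inside it (a bare (passed/total) refers to per-event fields that don't exist per time bucket, hence "expression has no fields"), so the passed and total counts can be computed inside the timechart itself:

```spl
SOURCE CODE
| timechart span=1w@w0 eval(round(count(eval(errorCount=0)) / count(shortVIN) * 100, 2)) AS percentPassed
```

count(eval(...)) counts only the events where the condition holds, so each weekly bucket gets its own passed/total ratio.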
This is my Splunk query to find the percentage usage of devices under test in our lab within a specific time range. I get the output I expect, but I'm having trouble feeding the same selected dropdown value in a dashboard into two different fields that my search query uses, from two different indexes. Resources{}.ReservationGroups and reservation_group are different fields from different indexes, but both hold the same values, and I'm trying to feed one specific dropdown value, e.g. "iOS1", into both fields to get the device-utilization result for that particular reservation group. Likewise, if someone selects any other value from the dropdown, it should be fed into these two fields and return the result accordingly.

index=X Resources{}.Agent.AgentName=agent* Resources{}.ReservationGroups=iOS1 | dedup device_symbolicname | table device_symbolicname, Resources{}.ReservationGroups | stats count by Resources{}.ReservationGroups | sort Resources{}.ReservationGroups | appendcols [search index=Y scheduler_url="*sched*" is_agent=false reservation_group=iOS1  | rename location as DUT, reservation_group as ReservationGroup | dedup DUT | stats count by ReservationGroup | rename count as pp_count | sort ReservationGroup] | eval pct_usage=count/pp_count*100
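A sketch of the usual pattern, with a made-up token name: define one dropdown token in the dashboard and reference it in both places in the search (SimpleXML, untested):

```xml
<input type="dropdown" token="res_group">
  <label>Reservation Group</label>
  <choice value="iOS1">iOS1</choice>
  <choice value="iOS2">iOS2</choice>
</input>
...
<query>
  index=X Resources{}.Agent.AgentName=agent* "Resources{}.ReservationGroups"=$res_group$
  ...
  | appendcols [search index=Y scheduler_url="*sched*" is_agent=false reservation_group=$res_group$ ...]
</query>
```

The same $res_group$ token can be substituted anywhere in the panel's search string, including inside the subsearch, so one selection drives both fields.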
I have an add-on running on a heavy forwarder that is using the name of the HF as the host. I'm trying to change the host to something more useful. All of the events are of the sourcetype rubrik:*. Here is a sample event: {"_time": "2022-03-07T23:31:00.000Z", "clusterName": "my-host", "locationId": "XXXmaskedXXX", "locationName": "XXXmaskedXXX", "type": "Outgoing", "value": 0} I would like to use "my-host" as the host. This is what I'm trying, with no success.

props.conf:
[rubrik:*]
TRANSFORMS-myhost = hostoverride

transforms.conf:
[hostoverride]
DEST_KEY = MetaData:Host
REGEX = \"clusterName\"\:\s\"(.+)\".+
FORMAT = host::$1
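One thing to check (a guess, not verified against the Rubrik add-on): props.conf sourcetype stanzas don't expand wildcards like rubrik:*, so the transform may never be attached. Spelling out the concrete sourcetype(s) and tightening the capture group might look like this; rubrik:events is a placeholder for whatever the add-on actually emits:

```conf
# props.conf -- one stanza per concrete sourcetype (wildcards don't work here)
[rubrik:events]
TRANSFORMS-myhost = hostoverride

# transforms.conf
[hostoverride]
REGEX = "clusterName":\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

The original (.+) with a trailing .+ is greedy and can swallow past the closing quote; [^"]+ stops at the end of the value. Index-time transforms also have to live on the first full Splunk instance that parses the data, which in this case should indeed be the HF.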
We've been indexing logs from our Barracuda Web Security Gateway via our syslog server with a default sourcetype of syslog. It works OK, but it doesn't pull out all the fields, and field extraction is hit or miss, as the logs aren't consistent. I've tried the various Barracuda apps and TAs on Splunkbase, both current and archived, with no success. Has anyone else solved this problem?
index=testlab sourcetype=testcsv | rex field="status detail" "(?<message_received_name>Messages Received)\\s*[0-9,]*\s*[0-9,]*\s*(?<message_received>[0-9,]*)" | rex field=message_received mode=sed "s/,//g" | eval myInt = tonumber(message_received) | reverse | delta myInt as message_received_delta | timechart span=10m sum(message_received_delta) by Hostname

The problem I find is that when I do only one hostname at a time, it works just fine. (Note the data is incremental counters only.) But when I introduce additional hostnames, some hostnames show a negative value. It should only show positive numbers (0 to infinity). Again, when I do a single host, it works just fine. Really need help on this one.
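A possible explanation: delta runs over the whole result set and has no "by" clause, so with several hosts interleaved it subtracts one host's counter from another's, producing negatives. A hedged rewrite using streamstats, which does support by Hostname (untested, field names taken from the question):

```spl
index=testlab sourcetype=testcsv
| rex field="status detail" "Messages Received\s*[0-9,]*\s*[0-9,]*\s*(?<message_received>[0-9,]*)"
| eval myInt = tonumber(replace(message_received, ",", ""))
| sort 0 Hostname _time
| streamstats current=f last(myInt) AS prev by Hostname
| eval message_received_delta = myInt - prev
| timechart span=10m sum(message_received_delta) by Hostname
```

The sort 0 keeps each host's events in time order before the per-host previous value is carried forward.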
We are having an issue with our new 8.2.2 Splunk instance any time there's a subsearch with a lot of data being searched (1,500,000+ events). It's any subsearch at all, whether it's a join, append, or regular subsearch. We get live results as the search is running, but when it finishes we get this error:

StatsFileWriterLz4 file open failed file=D:\Program Files\Splunk\var\run\splunk\srtemp\555759916_17936_at_1646839312.2\statstmp_merged_44.sb.lz4
The search job has failed due to an error. You may be able view the job in the Job Inspector

We observed this "srtemp" directory while running a search live, and we see a directory for the job being created (like the 555759916_17936_at_1646839312.2 above) with a bunch of temp files being populated inside of it. When smaller searches finish, the directory and all of its contents are successfully deleted and we get results as expected. With larger searches we get the error above and the folder is left behind, but all of the temp files inside the directory are successfully deleted. We have 9 TB of free space on the drive the directory is on, so we definitely aren't running out of space. We have an old Splunk instance (7.3.0) that does not have this issue at all. In fact, when observing the srtemp directory, nothing is created at all. Clearly there is some key difference we are missing, but we are not sure what. We've tried increasing various limits in limits.conf and tried switching the journal compression from Lz4 to GZip, and nothing has worked. We are stumped and not sure what to do next. None of the internal logs tell us anything more than what the error in the search says. Any sort of insight on what to do next would be greatly appreciated!
Hi, can anyone tell me the version of MySQL that's used by the console, controller, and EUM servers in the latest On-Premises release (21.4.12.24706)? Can I also check what version of Java is included in the latest release, please? Thanks, Maurice
I tried the following query: index=alldata Application="AZ" |eval Date=strftime(_time,"%m/%d/%Y %H:%M:%S %Z") |table Date user username |rename user as User, username as id |dedup id |appendcols [search index=infor |fields disName userPrin |table disName userPrin |rename disName as Name userPrin as Mail |dedup Mail ] |fields Date User id Mail Name |eval "Login Status"=case(id==Mail, "Logged in", id!=Mail, "Never Logged in") |eval Result=if(id=Mail, "Mail", "NULL") I would like to create a column in the table that compares values in column id and Mail  and lists unique values (non duplicate).
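One thing that may explain the odd case/if results: appendcols pastes the subsearch rows on positionally, so row N of one index is compared against row N of the other regardless of whether the users actually match. A sketch of a key-based combination instead (untested; it assumes username and userPrin carry the same identifier values):

```spl
index=alldata Application="AZ"
| eval Date=strftime(_time, "%m/%d/%Y %H:%M:%S %Z")
| rename user AS User, username AS id
| dedup id
| append
    [ search index=infor
      | rename disName AS Name, userPrin AS id
      | dedup id
      | eval in_infor="yes" ]
| stats values(Date) AS Date, values(User) AS User, values(Name) AS Name, values(in_infor) AS in_infor BY id
| eval "Login Status"=if(in_infor="yes" AND isnotnull(User), "Logged in", "Never Logged in")
```

Grouping by the shared id lines the two sources up by value rather than by row position, which also surfaces the unique (non-duplicate) values: rows where User or Name is null only exist in one index.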
hi, I use a "link to the search" drilldown from a table panel. When I have a look at my XML, I have a lot of special characters:

<drilldown> <link target="_blank">search?q=%60index_mesuresc%60%20sourcetype%3D%22ez%3Acitrix%22&amp;earliest=&amp;latest=</link> </drilldown>

As far as I know, we can use CDATA to correct this? But I don't know how to use the CDATA tag so that these characters are not displayed. I have tried this, but it doesn't work:

<drilldown> <link target="_blank"><![CDATA[search?q=%60index_mesuresc%60%20sourcetype%3D%22ez%3Acitrix%22&amp;earliest=&amp;latest=]]></link> </drilldown>

Could you help please?
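For what it's worth, the %60/%20/%22 sequences are ordinary URL-encoding and &amp; is ordinary XML escaping; neither changes what the user sees when the link is clicked. If the goal is simply readable XML, CDATA has to wrap the unescaped form, i.e. a literal & instead of &amp; inside the CDATA section. A sketch (untested; the $earliest$/$latest$ tokens are placeholders):

```xml
<drilldown>
  <link target="_blank"><![CDATA[search?q=`index_mesuresc` sourcetype="ez:citrix"&earliest=$earliest$&latest=$latest$]]></link>
</drilldown>
```

Keeping the already-escaped text inside CDATA, as in the attempt above, means the %-encoding and &amp; are then passed through literally, which is likely why it didn't work.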
As I've written a few times already, I have in my care a relatively strange environment - a quite big installation with RF=1. Yes, I know I don't have data resilience or high availability - the customers knew it and accepted it at the start of the project. But since we're approaching the upgrade, and as I'm reading the upgrade instructions, some questions pop up. The normal procedure includes a rolling upgrade of cluster member nodes. The rolling upgrade starts with splunk upgrade-init cluster-peers and ends with splunk upgrade-finalize cluster-peers (or the proper calls to REST endpoints). The question is - what do those two commands really do, and how do they affect the RF=1 situation? As I asked before - it's pointless to put my cluster in maintenance mode, and there is no bucket rebalancing after offline/online because there is nothing to rebalance. So do I have to bother with all this, or can I simply take the indexers down one by one, upgrade, and start them up again? Yes, I know I won't have full search capacity during an indexer's downtime - it's obvious that if the data is not there I can't search it, and my searches would be incomplete. The customers know it and we'll schedule a "partial downtime". What do you think?
Hi all, Does anyone know if it's possible to use Cyberark to rotate the Splunk SOAR admin account password? If it is, do you have any pointers to get this solution implemented? As always, any help is most gratefully received, Mark.
We would like to monitor Spring Boot's HikariCP connection pool using AppDynamics. We saw a possibility of doing so using JMX MBeans, but can't get it to work. The MBean in JConsole looks as follows: And the JMX Metric Rule looks as follows: Did I make a mistake in the configuration? Is there another way to monitor the connection pool?
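One thing worth checking first (an assumption, not confirmed for this setup): HikariCP only registers its MBeans when asked to, so without the flag below a JMX rule has nothing to match. In Spring Boot this is a standard configuration property:

```properties
# application.properties -- ask HikariCP to register its pool MBeans
spring.datasource.hikari.register-mbeans=true
# optional: a fixed pool name, so the AppDynamics JMX metric rule can match on it
spring.datasource.hikari.pool-name=MyAppPool
```

With the MBean registered under a stable name, the rule's object-name pattern can target com.zaxxer.hikari:type=Pool (MyAppPool) rather than an auto-generated HikariPool-1 that changes between restarts.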
Hi, I have this search:     | spath | rename object.* as * | spath path=events{} output=events | stats by timestamp, events, application, event_type, account_id, context.display_name, | mvexpand events | eval _raw=events | kv | table timestamp, payload.rule_description, "context.display_name", account_id, "event_type", "application", "payload.rule_url" | rename account_id as "Account ID", timestamp as "Timestamp", context.display_name as "System", context.host_url as "Host URL", event_type as "Event Type", "title" as "Title", "application" as "Application", "payload.rule_url" as "URL"

I have JSON with multiple `events`; inside each event I have "payload.rule_description", but some records don't have the "payload.rule_description" object, so the field is missing. How can I check whether the record has "payload.rule_description" and, if not, bring in `event_type` instead? I tried `eval title=if(payload.rule_description, payload.rule_description, event_type)` but it doesn't work. Thanks
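A hedged suggestion: field names containing dots have to be wrapped in single quotes when read inside an eval expression (otherwise SPL parses the dot as concatenation), and coalesce handles the missing-field fallback in one step:

```spl
| eval title=coalesce('payload.rule_description', event_type)
```

coalesce returns its first non-null argument, so records without payload.rule_description fall through to event_type.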
Upgrading the Microsoft Azure Add-on for Splunk via the deployment server wipes out all the inputs. Is there a way to preserve them? I always take a backup of the inputs and re-add them, but then you lose the look-back period. Thanks in advance
Hi, we have a directory with daily log files I want to read into Splunk 8.1.5: /dir1/dir2/dir3/dir4/file-20220309.log, file-20220308.log, ... Version A, working: "[monitor:///dir1/dir2/dir3/dir4]" Version B, working: "[monitor:///dir1/*/d*/dir4/*]" Version C, failing: "[monitor:///dir1/*/d*/dir4]" Version C would in theory match the example of "[monitor:///apache/*/logs]" in the documentation, wouldn't it? That is, as long as "logs" is a directory. Do I miss something here? Do I see a bug? Is there a limit on the number of wildcards in a path? Puzzled in Hamburg Volkmar
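Two workarounds worth trying if the trailing non-wildcard directory is what breaks version C (untested sketches; the whitelist regex assumes the file-YYYYMMDD.log naming from the question):

```conf
# inputs.conf -- match the files themselves instead of ending on a directory
[monitor:///dir1/*/d*/dir4/file-*.log]

# or monitor the whole tree and restrict it with a whitelist regex
[monitor:///dir1]
whitelist = /[^/]+/d[^/]+/dir4/file-\d{8}\.log$
```

Splunk internally converts wildcarded monitor paths into a base directory plus a whitelist regex, so writing the whitelist explicitly, as in the second stanza, is often the more predictable of the two.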
Hello, How can I report certain "Query Details", just like we do in the dashboard, on an hourly/daily basis? I tried to find a way around it by creating a dashboard with an iFrame using the target URL, but it contains a lot of unneeded content, and I can't view anything that needs scrolling. Replicating the "Query Details" in the dashboard is another idea, but creating a dashboard for each query the customer asks for will be painful as well. Even the report is sent as a blank page! Is there any other way for this to be done? Attached are a sample of the "Query Details", the workaround I mentioned, and the received report. Regards, Khalid.. The "Query Details": "Query Details" as URL in iFrame Dashboard: Received Report:
Hi, I have 2 timecharts where I need to show a TOTAL count across specified field values. The first timechart must show the total count over all field values and the 2nd timechart must show the total count over 2 field values. I am unable to incorporate a stats or eval function before the timechart function. Here is what my timecharts currently look like: And here is the respective XML code: Can you please help? Many thanks, Patrick
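Since the screenshots didn't survive, here is a generic sketch with made-up index and field names: timechart can compute the overall total and a two-value total directly, with no stats or eval command needed before it, because the restriction goes inside an eval-wrapped count:

```spl
index=foo
| timechart span=1d count AS total_all, count(eval(status="A" OR status="B")) AS total_two_values
```

Alternatively, if the existing per-value split should stay, appending | addtotals fieldname=TOTAL after the split timechart adds a row-total series alongside the individual field values.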
I am looking for "failed login for ADMIN detected", but because the timestamp Splunk assigned (_time) is two years off, it doesn't alert. My log sample is: I also have _time 2020-02-23T23:02:20.000+01:00 My search so far is:

index=abc sourcetype=def "Failed login for ADMIN detected" | rex field=_raw "(?ms)(?=[^c]*(?:cs2=|c.*cs2=))^(?:[^=\\n]*=){5}(?P<DatabaseEventDate>[^ ]+)" | stats count by duser cs1 cs2 DatabaseEventDate

This gives me a new field with the correct time: DatabaseEventDate 23.02.2022,13:11:39 How can I correct the timestamp without changing the props file (since the basics of the search work for another use case)? Please help!! Thanks in advance
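One search-time option, assuming the extracted DatabaseEventDate always looks like 23.02.2022,13:11:39: overwrite _time with strptime right after the rex, so alert time windows work without touching props.conf (untested sketch):

```spl
index=abc sourcetype=def "Failed login for ADMIN detected"
| rex field=_raw "(?ms)(?=[^c]*(?:cs2=|c.*cs2=))^(?:[^=\\n]*=){5}(?P<DatabaseEventDate>[^ ]+)"
| eval _time=strptime(DatabaseEventDate, "%d.%m.%Y,%H:%M:%S")
| where _time >= relative_time(now(), "-24h")
| stats count by duser cs1 cs2 DatabaseEventDate
```

Note the alert's own time-range picker still filters on the original (wrong) index-time _time, so the search window has to be wide enough to catch the mis-stamped events before the where clause narrows them down.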
Hi All, I want to ask if you know how to detect when someone changes their mobile number in AD. BR,
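One possible starting point, heavily hedged: if "Audit Directory Service Changes" is enabled on the domain controllers, Windows logs EventCode 5136 when an AD object attribute is modified, and the changed attribute appears as AttributeLDAPDisplayName. The index and exact field names below depend on your Windows add-on and are assumptions:

```spl
index=wineventlog EventCode=5136 AttributeLDAPDisplayName=mobile
| table _time, SubjectUserName, ObjectDN, AttributeValue, OperationType
```

OperationType distinguishes the value being deleted from the value being added, so a single edit typically produces a pair of 5136 events.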
There are two environments, INT and PROD. The value of IREFFECTIVEDATE in INT is always the same, as is PROD's; however, they have different values. I want to know when the value of IREFFECTIVEDATE in either environment changes. Here is a log sample: 2022-03-04 14:13:00.006, IREFFECTIVEDATE="2016-07-01 00:00:00.0", IRLOANRATE="5" So far my search is this:

index= xy sourcetype=xy | eval env = if(host=="prod1", "PROD", "INT") | table IREFFECTIVEDATE IRLOANRATE env | head 1 | eval single_value="IREFFECTIVEDATE : ".IREFFECTIVEDATE." | IRLOANRATE : ".IRLOANRATE." | Environment : ".env" | fields single_value | sort 0 _time | streamstats current=f last(IREFFECTIVEDATE) as priorDate last(_time) as priorTime by env | where NOT (IREFFECTIVEDATE=priorDate) | mvcombine single_value delim=" " | nomv single_value

Streamstats recognizes the changing value, but it needs to be split by env. Any ideas, please?
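A possible reordering, untested: in the current search, head 1, table, and the single_value eval all run before streamstats, which leaves only one event and strips the fields streamstats needs (including _time and env). Doing the per-env comparison first and formatting afterwards might look like:

```spl
index=xy sourcetype=xy
| eval env = if(host=="prod1", "PROD", "INT")
| sort 0 _time
| streamstats current=f last(IREFFECTIVEDATE) AS priorDate by env
| where isnotnull(priorDate) AND IREFFECTIVEDATE != priorDate
| eval single_value="IREFFECTIVEDATE : ".IREFFECTIVEDATE." | IRLOANRATE : ".IRLOANRATE." | Environment : ".env
| table _time, single_value
```

The isnotnull guard keeps the first event of each environment (which has no prior value to compare against) from being reported as a change.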