All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Gentlemen, how can I use eval to assign a field the value of one of two different fields? In my events I have two fields: empID and Non-empID. I want eval to create a new field called identity that holds the value of whichever of empID or Non-empID is present. I hope that is clear. I tried

eval identity = coalesce(empID, Non-empID)

but this didn't work. Any suggestions? Is there any other way to get this done if eval can't do it? Eventually I am going to build a table like the following, where the identity column consolidates whichever of empID / Non-empID is present for that employee record:

identity | First | Last | email | Emp ID | Non-Emp ID
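A likely fix, sketched on the assumption that the field really is named Non-empID: inside eval, a field name containing a hyphen must be wrapped in single quotes, otherwise coalesce(empID, Non-empID) is parsed as arithmetic involving nonexistent fields named Non and empID:

| eval identity = coalesce(empID, 'Non-empID')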
Is it possible to configure a heavy forwarder's inputs.conf to receive data forwarded from a universal forwarder? Please point me to the relevant Splunk documentation.
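A minimal sketch, assuming the UF sends over the conventional Splunk-to-Splunk port 9997: on the heavy forwarder, inputs.conf only needs a splunktcp stanza listening on the port that the UF's outputs.conf targets (the "Enable a receiver" topic in the Forwarding Data manual covers this):

[splunktcp://9997]
disabled = 0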
Why can't free trial Splunk Cloud accounts access the REST API?
Hi, I have a requirement: for certain appName values in the table below, the drilldown panel should show a message like "as it is lite application it is not applicable" instead of "No results". The lite apps are FGS, CMS, and abd. On click of the success/failure count for a lite app, the drilldown panel should show the above message.

appName | success | failure
abd | 1 | 12
profile | 5 | 13
FGS | 7 | 14
DSM | 9 | 15
CMS | 3 | 12

Thanks in advance
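One possible approach, sketched for a Simple XML dashboard (the token names here are made up): branch the table's drilldown on the clicked appName, set a message token for the lite apps, and let the results panel and a message panel depend on opposite tokens:

<drilldown>
  <condition match="match($row.appName$, &quot;^(FGS|CMS|abd)$&quot;)">
    <set token="lite_msg">as it is lite application it is not applicable</set>
    <unset token="show_results"></unset>
  </condition>
  <condition>
    <set token="show_results">true</set>
    <unset token="lite_msg"></unset>
  </condition>
</drilldown>

The drilldown results panel then carries depends="$show_results$", and a sibling <html> panel with depends="$lite_msg$" renders $lite_msg$.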
When I run an outputlookup from Dashboard Studio, a security error occurs. How can I allow this?
I have the following in a statistics table on a dashboard:

index=* <do search>
| dedup B C
| table _time B C D E F J
| sort - _time

I would like to have a count at the end of each row telling how many events were deduplicated.
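A sketch of one way to get that count: tally the duplicates with eventstats before dedup discards them (dup_count is a made-up field name):

index=* <do search>
| eventstats count AS dup_count BY B C
| dedup B C
| table _time B C D E F J dup_count
| sort - _time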
I am looking to export the results of a Splunk search that contains transforming commands. When I run the same search in the web GUI, the live results "hang" at 50,000 stats, but once the search is complete it shows more than 300,000 (screenshots below). Using the Splunk API, I want to export all results in .json format, and I only want to view the final results; I do not want to view the results as they are streamed. In essence, I want to avoid the API returning any row where "preview":true. What am I missing?

[Screenshot: while performing the search]
[Screenshot: finished results]

Using Python 3.9's requests, my script contains the following:

from requests import post

output_type = 'json'
headers = {'Authorization': 'Splunk %s' % sessionKey}
parameters = {'exec_mode': 'oneshot', 'output_mode': output_type, 'adhoc_search_level': 'fast', 'count': 0}
with post(url=baseurl + '/services/search/jobs/export', params=parameters,
          data={'search': search_query}, timeout=60, headers=headers,
          verify=False, stream=True) as response:
    ...
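One possible workaround, sketched with the same variables as the script above (not necessarily the only fix): the export endpoint streams preview rows for transforming searches by design, so instead create the job with exec_mode=blocking and then page through the finished results from the /results endpoint, which only returns finalized rows (the 50,000 page size matches the default maxresultrows cap):

from requests import get, post

# create the search job and block until it completes
job = post(url=baseurl + '/services/search/jobs',
           params={'output_mode': 'json'},
           data={'search': search_query, 'exec_mode': 'blocking'},
           headers=headers, verify=False, timeout=600)
sid = job.json()['sid']

# page through the finalized (non-preview) results
results, offset = [], 0
while True:
    page = get(url=baseurl + '/services/search/jobs/%s/results' % sid,
               params={'output_mode': 'json', 'count': 50000, 'offset': offset},
               headers=headers, verify=False, timeout=600).json()
    results.extend(page['results'])
    if len(page['results']) < 50000:
        break
    offset += 50000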
I am indexing email data that Splunk reads from an inbox folder (via TA-mailclient). Those emails contain a CSV file that arrives as an attachment to the email. Below is an example where the attachment's field name is file_content and the field value is:

Stopped by Reputation Filtering,Stopped as Invalid Recipients,Spam Detected,Virus Detected,Stopped by Content Filter,Total Threat Messages,Clean Messages,Total Attempted Messages
9.28068485506,0.0,45.1350500141,0.00114191624597,1.53311465023,55.9499914356,44.0500085644,--
251946,0,1225297,31,41620,1518894,1195841,2714735

I want to be able to manipulate the results to look like this:

Stopped by Reputation Filtering | Stopped as Invalid Recipients | Spam Detected | Virus Detected | Stopped by Content Filter | Total Threat Messages | Clean Messages | Total Attempted Messages
9.280684855 | 0 | 45.13505001 | 0.001141916 | 1.53311465 | 55.94999144 | 44.05000856 | --
251946 | 0 | 1225297 | 31 | 41620 | 1518894 | 1195841 | 2714735

Can someone please advise how to achieve this?
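A sketch of one approach, assuming file_content holds exactly the three CSV lines shown (the @@@ delimiter is arbitrary): split the attachment into lines, zip the header row against the two data rows, and expand to one result row per column; transpose or xyseries can then pivot the result back into the wide layout above:

| eval lines = split(replace(file_content, "[\r\n]+", "@@@"), "@@@")
| eval headers = split(mvindex(lines, 0), ","), pcts = split(mvindex(lines, 1), ","), counts = split(mvindex(lines, 2), ",")
| eval pairs = mvzip(mvzip(headers, pcts, "|"), counts, "|")
| mvexpand pairs
| eval column = mvindex(split(pairs, "|"), 0), pct = mvindex(split(pairs, "|"), 1), count = mvindex(split(pairs, "|"), 2)
| table column pct count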
I have the following log that Splunk is not recognizing well:

msg=id=123342521352 operation=write

How can I write a query so that id is parsed by itself? The goal is to be able to build a table like this:

id | operation
123342521352 | write

Unfortunately, when I tried

| table id operation

the id column is always empty, as the field does not seem to be extracted correctly.
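A minimal sketch of one fix: because of the doubled key=value (msg=id=...), automatic extraction most likely assigns the whole "id=123342521352" string to msg, so an explicit rex can pull both fields out:

| rex "msg=id=(?<id>\d+)\s+operation=(?<operation>\w+)"
| table id operation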
Hi, long-time reader, first-time poster. I've cobbled together this query, which generates a count by status for last week and the week before, and I would like to add a PercentChange column.

index="my_index" container_label=my_notables container_update_time!=null earliest=-14d@w0 latest=@w0
| fields id, status, container_update_time
| eval Time=strftime(_time,"%m/%d/%Y %l:%M:%S %p")
| eval container_update_time_epoch = strptime(container_update_time, "%FT%T.%5N%Z")
| sort 0 -container_update_time
| dedup id
| eval status=case((status="dismissed"), "Dismissed (FP)", (status="resolved"), "Resolved (TP)", true(), "Other")
| eval marker=if(relative_time(now(),"-7d@w0")<container_update_time_epoch, "WeekReporting", "PriorWeek")
| eval _time=if(relative_time(now(),"-7d@w0")<container_update_time_epoch, container_update_time_epoch, container_update_time_epoch+60*60*24*7)
| chart count by status marker

I know I need to incorporate the following eval somehow; I'm just not sure how to tie it all together to get it to show up in the format shown above.

| eval PercentChange= if(PriorWeek!=0, (WeekReporting-PriorWeek)/PriorWeek*100, WeekReporting*100)

I'll be honest, I'm not sure if I still need the final eval, so I'll gladly accept any other suggestions that make this more efficient. I appreciate any and all tips or help to make this work. Cheers, Michael
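One way to wire it together, sketched on the assumption that the chart emits columns named PriorWeek and WeekReporting: append the eval right after the chart, with a fillnull so a status missing from one week doesn't null out the math:

| chart count by status marker
| fillnull value=0 PriorWeek WeekReporting
| eval PercentChange = if(PriorWeek != 0, round((WeekReporting - PriorWeek) / PriorWeek * 100, 1), WeekReporting * 100)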
Hello All, can someone please help me understand the functionality of AUTO_KV_JSON? Does it modify the functionality of KV_MODE, and if so, under what scenario? From my experiments it doesn't seem to do anything when traversing the permutations of INDEX_EXTRACTIONS, KV_MODE, and AUTO_KV_JSON. We typically just use the following settings in our sourcetype definitions:

KV_MODE = json
AUTO_KV_JSON = true

[Screenshot: table of results]

I would appreciate a response. Thank you.
I'm trying to extract a report for devices on my network. Home Assistant sends a log record with a value of 1 when a device is present and 0 when it's not, but sometimes it loses the record of a device. In the sample data below, I need to find the first (value=0) occurrence after the last (value=1) for each Friendly Name per day. Can someone help me with this, please?

TimeStamp | Friendly Name | value
9/3/22 12:48 | User B | 0
9/3/22 11:58 | User B | 0
9/3/22 10:32 | User B | 1
9/3/22 10:27 | User B | 0
9/3/22 7:44 | User B | 1
9/3/22 7:22 | User B | 1
9/3/22 0:15 | User B | 0

In this case I need the second record (9/3/22 11:58).
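A sketch of one approach (day and prev_value are made-up field names, and this assumes each day's trace ends with the device absent): with events sorted ascending, every row where the previous value was 1 and the current value is 0 marks a presence-to-absence transition, and the latest such transition in a day is exactly the first 0 after the last 1:

| bin _time span=1d as day
| sort 0 "Friendly Name" _time
| streamstats current=f window=1 last(value) as prev_value by "Friendly Name" day
| where value==0 AND prev_value==1
| stats max(_time) as first_zero_after_last_one by "Friendly Name" day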
Source code:

| eventstats count(eval(errorCount=0)) AS passed, count(shortVIN) AS total
| timechart span=1w@w0 eval((passed/total)*100) AS percentPassed

I get a message saying "expression has no fields" on the timechart. I am unsure which expression to use to display week 1's percentage, then week 2's percentage, and so on. Any help is appreciated.
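A sketch of one likely fix: eventstats computes a single overall passed/total pair rather than one per week, and timechart's eval() form expects aggregations over event fields, so it is simpler to move both counts into timechart and derive the percentage afterwards:

| timechart span=1w@w0 count(eval(errorCount=0)) AS passed, count(shortVIN) AS total
| eval percentPassed = round((passed / total) * 100, 1)
| fields _time percentPassed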
This is my Splunk query to find the percentage usage of devices under test in our lab within a specific time range. I'm able to get the output as expected, but I'm having trouble taking a single selected dropdown value on the dashboard and feeding it to two different fields that my search uses across two different indexes. Resources{}.ReservationGroups and reservation_group are different fields from different indexes, but both hold the same values, and I want the dropdown selection (e.g. "iOS1") fed to both fields so I get the device utilization for that reservation group. Likewise, if someone selects any other value from the dropdown, it should feed into both fields and return results accordingly.

index=X Resources{}.Agent.AgentName=agent* Resources{}.ReservationGroups=iOS1
| dedup device_symbolicname
| table device_symbolicname, Resources{}.ReservationGroups
| stats count by Resources{}.ReservationGroups
| sort Resources{}.ReservationGroups
| appendcols
    [ search index=Y scheduler_url="*sched*" is_agent=false reservation_group=iOS1
      | rename location as DUT, reservation_group as ReservationGroup
      | dedup DUT
      | stats count by ReservationGroup
      | rename count as pp_count
      | sort ReservationGroup ]
| eval pct_usage=count/pp_count*100
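A sketch, assuming a Simple XML dashboard: a single input token can be referenced any number of times, in the outer search and inside the subsearch alike, so define one dropdown token (res_group is a made-up name) and substitute it for both hard-coded iOS1 values:

<input type="dropdown" token="res_group">
  <label>Reservation Group</label>
  <choice value="iOS1">iOS1</choice>
</input>

and in the search:

index=X Resources{}.Agent.AgentName=agent* Resources{}.ReservationGroups=$res_group$
...
| appendcols
    [ search index=Y scheduler_url="*sched*" is_agent=false reservation_group=$res_group$
      ... ]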
I have an add-on running on a heavy forwarder that is using the name of the HF as the host. I'm trying to change the host to something more useful. All of the events have a sourcetype of rubrik:*. Here is a sample event:

{"_time": "2022-03-07T23:31:00.000Z", "clusterName": "my-host", "locationId": "XXXmaskedXXX", "locationName": "XXXmaskedXXX", "type": "Outgoing", "value": 0}

I would like to use "my-host" as the host. This is what I'm trying, with no success.

props.conf:

[rubrik:*]
TRANSFORMS-myhost = hostoverride

transforms.conf:

[hostoverride]
DEST_KEY = MetaData:Host
REGEX = \"clusterName\"\:\s\"(.+)\".+
FORMAT = host::$1
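A sketch of two likely fixes (rubrik:events is a made-up stand-in for your actual sourcetype names): plain props.conf sourcetype stanzas don't accept wildcards, so each sourcetype must be named explicitly (the community-documented [(?::){0}rubrik:*] stanza trick can emulate a wildcard), and the greedy (.+) will run past my-host to the last quote in the event, so a negated character class is safer:

props.conf:

[rubrik:events]
TRANSFORMS-myhost = hostoverride

transforms.conf:

[hostoverride]
DEST_KEY = MetaData:Host
REGEX = "clusterName":\s*"([^"]+)"
FORMAT = host::$1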
We've been indexing logs from our Barracuda Web Security Gateway via our syslog server with a default sourcetype of syslog. It works OK but doesn't pull out all the fields, and field extraction is hit or miss because the logs aren't consistent. I've tried the various Barracuda apps and TAs on Splunkbase, both current and archived, with no success. Has anyone else solved this problem?
index=testlab sourcetype=testcsv
| rex field="status detail" "(?<message_received_name>Messages Received)\s*[0-9,]*\s*[0-9,]*\s*(?<message_received>[0-9,]*)"
| rex field=message_received mode=sed "s/,//g"
| eval myInt = tonumber(message_received)
| reverse
| delta myInt as message_received_delta
| timechart span=10m sum(message_received_delta) by Hostname

The problem I find is that when I run this for only one hostname at a time, it works just fine (note the data is incremental counters only). But when I introduce additional hostnames, some hostnames show negative values; it should only show non-negative numbers. Again, with a single host it works just fine. I really need help on this one.
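A sketch of one likely fix: delta has no by clause, so it subtracts the previous event's value regardless of which host it came from, which produces negatives once several hosts interleave; streamstats can take the per-host difference instead (prevInt is a made-up field name):

index=testlab sourcetype=testcsv
| rex field="status detail" "(?<message_received_name>Messages Received)\s*[0-9,]*\s*[0-9,]*\s*(?<message_received>[0-9,]*)"
| rex field=message_received mode=sed "s/,//g"
| eval myInt = tonumber(message_received)
| sort 0 Hostname _time
| streamstats current=f window=1 last(myInt) as prevInt by Hostname
| eval message_received_delta = myInt - prevInt
| timechart span=10m sum(message_received_delta) by Hostname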
We are having an issue with our new 8.2.2 Splunk instance any time a subsearch runs over a lot of data (1,500,000+ events). It happens with any subsearch at all, whether it's a join, append, or regular subsearch. We get live results as the search is running, but when it finishes we get this error:

StatsFileWriterLz4 file open failed file=D:\Program Files\Splunk\var\run\splunk\srtemp\555759916_17936_at_1646839312.2\statstmp_merged_44.sb.lz4

The search job has failed due to an error. You may be able view the job in the Job Inspector

We observed this srtemp directory while running a search live, and we see a directory for the job being created (like 555759916_17936_at_1646839312.2 above) with a bunch of temp files being populated inside it. When smaller searches finish, the directory and all of its contents are successfully deleted and we get results as expected. With larger searches we get the error above and the folder is left behind, but all of the temp files inside the directory are successfully deleted. We have 9 TB of free space on the drive the directory is on, so we definitely aren't running out of space. We have an old Splunk instance (7.3.0) that does not have this issue at all; in fact, when observing its srtemp directory, nothing is created at all. Clearly there is some key difference we are missing, but we are not sure what. We've tried increasing various limits in limits.conf and tried switching the journal compression from Lz4 to GZip, and nothing has worked. We are stumped and not sure what to do next. None of the internal logs tell us anything more than what the error in the search says. Any insight on what to do next would be greatly appreciated!
Hi, can anyone tell me the version of MySQL that's used by the console, controller, and EUM servers in the latest On-Premise version (21.4.12.24706)? Can I also check what version of Java is included in the latest release, please? Thanks, Maurice
I tried the following query:

index=alldata Application="AZ"
| eval Date=strftime(_time,"%m/%d/%Y %H:%M:%S %Z")
| table Date user username
| rename user as User, username as id
| dedup id
| appendcols
    [ search index=infor
      | fields disName userPrin
      | table disName userPrin
      | rename disName as Name, userPrin as Mail
      | dedup Mail ]
| fields Date User id Mail Name
| eval "Login Status"=case(id==Mail, "Logged in", id!=Mail, "Never Logged in")
| eval Result=if(id=Mail, "Mail", "NULL")

I would like to create a column in the table that compares the values in the id and Mail columns and lists the unique (non-duplicate) values.
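One possible reshaping, sketched because appendcols pairs rows by position rather than by matching values, which makes the row-wise id/Mail comparison unreliable: union the two indexes, group on a shared key, and flag which side(s) each value appears in (key, in_az, and in_infor are made-up names):

(index=alldata Application="AZ") OR (index=infor)
| eval key=lower(coalesce(username, userPrin))
| stats count(eval(index="alldata")) as in_az, count(eval(index="infor")) as in_infor, values(disName) as Name by key
| eval "Login Status"=case(in_az>0 AND in_infor>0, "Logged in", in_infor>0, "Never Logged in", true(), "AZ only")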