All Topics

Hello,

I recently upgraded from Splunk 7 to Splunk 8.2.4. After the upgrade, I noticed that some transforming commands, such as chart and stats, do not work in Smart and Fast mode.

For instance:

index=main | chart count by host

returns the expected results in Verbose mode, but returns 0 results in Smart and Fast mode.

P.S. The transaction command still works, but I have to select the fields I want with fields in place of table; in Splunk 7, table worked too.

I would like the stats and chart commands to keep working in Fast search mode, as they did in Splunk 7. Could you help me restore the Splunk 7 behavior?

Thank you very much.
Kind regards,
Marco
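A hedged first thing to try: in Fast mode, Splunk skips field discovery and extracts only the fields a search explicitly references, so naming the field before the transforming command sometimes restores the old behavior:

index=main
| fields host
| chart count by host

If that doesn't help, it is worth comparing fields.conf and any indexed-field settings between the two environments, since changes in indexed-field handling across major versions are a common cause of Fast-mode-only differences.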
Hi all,

We have Python code that ingests MongoDB logs into Splunk, and we are successfully ingesting logs from the old servers. Now there is a requirement to ingest MongoDB logs into Splunk from new servers:

mongodb://USER:PASS@SERVER1:27017,SERVER2:27017/abc_analytics?replicaSet=mongo-replica

This is how logs are ingested. When I try the same for the new servers, I get an "Invalid key" error.

Note:
1) Firewall connectivity is working fine.
2) The MongoDB team says the password is correct.

Is the password that is used the one given by the Splunk team or the MongoDB team? If it is the MongoDB team, where do they need to check the password and the user ID?

Internal logs:

02-17-2022 18:45:50.916 +1100 WARN Application - Invalid key in stanza [abc_analytics://XXX-XXX-XXX] in /opt/splunk/etc/deployment-apps/modinput_abc_analytics_mongodb-XXX-XXX-XXX/local/inputs.conf, line 34: mongodb_uri (value: mongodb://Mongodbservername1.local:27017,Mongodbservername2.local:27017/abc_analytics?replicaSet=mongo-replica).
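For what it's worth, the "Invalid key in stanza" warning usually means the parameter is not declared in the modular input's inputs.conf.spec on the instance that parses the stanza, rather than a credential problem. A minimal sketch of what the app's README/inputs.conf.spec would need to contain, assuming the scheme name abc_analytics from the log above:

[abc_analytics://<name>]
mongodb_uri = <value>
* Connection string for the MongoDB replica set to ingest from.

If the spec already declares mongodb_uri for the old servers' app, it would be worth checking that the same app version is deployed wherever the new input runs.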
Hello all,

I was extracting some volume data for PE testing from production systems using the following query. I expect to get event counts from 9 AM to 6 PM with respect to proxy names, but the query is producing stats for the entire day. Please help me remove the extra data.

Query:

index=index_Name environmentName=Env_name clientAppName="App_Name"
| eval eventHour=strftime(_time,"%H")
| where eventHour<18 AND eventHour>=9
| timechart count span=60m by proxyName

Result:

Time              Proxy1  Proxy2
2022-02-16 06:00  0       0
2022-02-16 07:00  0       0
2022-02-16 08:00  0       0
2022-02-16 09:00  27      34
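A likely explanation: timechart emits one row per span across the entire search time range and fills buckets that have no surviving events with 0, so the where clause cannot suppress the 06:00-08:00 rows. A hedged workaround is to filter the timechart output instead (tonumber keeps the hour comparison numeric, since strftime returns a string):

index=index_Name environmentName=Env_name clientAppName="App_Name"
| timechart count span=60m by proxyName
| eval eventHour=tonumber(strftime(_time,"%H"))
| where eventHour>=9 AND eventHour<18
| fields - eventHour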
Could you please tell me about the following? If I want to limit memory usage for a search, is it correct to think that I should set the following?

=====
[search]
enable_memory_tracker = true
search_process_memory_usage_threshold = 10000
search_process_memory_usage_percentage_threshold = 60
=====

Note: my understanding is that if either value, 10000 (MB) or 60 (%), is reached, the search process is forcibly terminated.

Is it correct to understand that the above setting applies to all searches, including ad hoc searches?

If I want to enable the settings for all apps' searches, is it safe to add them to the limits.conf below?

$SPLUNK_HOME/etc/system/local/limits.conf

Note: to target an individual app's searches, set $SPLUNK_HOME/etc/apps/<app name>/local/limits.conf instead.

Am I correct in thinking that the above limits.conf settings should be set on both the search head and the indexers?
Hello,

I am looking at creating a dashboard which shows us the least visited domains in the last 30 days. I also want to set up an hourly email alert for the most and least visited domains. Unfortunately, I don't have any domain tools/applications installed and want to create a search for the time being.

Use case:
- URL count in the last 30 days within a certain location
- Only see the top-level domains
- Set up an email alert every hour

I currently have this set up, but it doesn't work properly:

index=proxy OR index=web gateway src_country="country"
| regex url="(?<TLD>\.\w+?)(?:$|\/)"
| search ([|inputlookup of users.csv])
| stats count as Total by url

Thanks,
Mark Nicholls
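One thing that stands out: regex only filters events and never creates fields, so the TLD capture group is never extracted and the final stats still groups by the full url. A hedged sketch of the intended search, with rex doing the extraction (the head count is a placeholder):

index=proxy OR index=web gateway src_country="country"
| rex field=url "(?<TLD>\.\w+?)(?:$|\/)"
| stats count as Total by TLD
| sort Total
| head 10

The same base search with sort - Total would give the most visited domains for the hourly alert.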
    index="***********" sourcetype="**********" (host="*") | rex field=_raw "(Available Updates)\s+(?<AvailableUpdates>.+)" | table _time _raw host AvailableUpdates | stats latest(AvailableUpdate... See more...
    index="***********" sourcetype="**********" (host="*") | rex field=_raw "(Available Updates)\s+(?<AvailableUpdates>.+)" | table _time _raw host AvailableUpdates | stats latest(AvailableUpdates) as AvailableUpdates by host   Hey guys. So I have a search that gives a table as such: Host __________________ AvailableUpdates Host1_________________ = 21 Host2__________________= 0 Host3__________________= 5 Host4__________________= 0 Host5__________________ null I am looking to make a piechart with 2 different "values" 1 "value" is all the "= 0" in green, and the rest in red. Can't quite figure out how to sort this.  Tyvm
Hi all,

I want the result to show a value of '0' in a column, without using the chart command.

Thank you.
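A hedged guess at the intent, assuming the goal is to show 0 instead of an empty cell when an aggregation produces no value for a group (the index and field names are placeholders):

index=main
| stats count by host
| fillnull value=0 count

fillnull replaces missing values with 0 in the named fields (or in every field if none are listed), which is the usual chart-free way to force zeros into a column.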
Hi,

I have an alert that runs every day at the same time. At the end of the alert's search I use collect:

| collect index="index_name"

Every day the job runs (it takes about a minute), but I don't see the new events after the job is finished. How long is it supposed to take until I see them in the index?

This is the search I use to look for them (with a "last 24 hours" time filter):

index="index_name"
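One hedged thing to check: collect writes each summary event with the _time of the original result, not the time the alert ran, so results older than a day land outside a "last 24 hours" window even though they were only just collected. Searching the summary index over all time should confirm whether the events are actually there:

index="index_name" earliest=0

If events show up with old timestamps, widening the time picker, or adding | eval _time=now() just before collect, resolves the apparent delay.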
Hello everyone,

I'm still very new to the world of Splunk Enterprise, so I hope that you can help me with my problem. I created the following search to be notified of app updates by email:

| rest /services/apps/local
| search update.version != ""
| rename title AS Update_APP, version AS Update_Version, update.version AS Update_Versionupdate
| table Update_APP Update_Version Update_Versionupdate

The notification is scheduled to run every day at 12:00 p.m., with a trigger condition of one result. However, I get the same notification email every day, even though I've already received it. What do I have to do so that the message is only sent once?

Please excuse my bad English.

Best regards,
Björn
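A hedged sketch of one way to suppress repeats by remembering which versions have already been reported, using a lookup file with the hypothetical name notified_updates.csv:

| rest /services/apps/local
| search update.version != ""
| eval pending_version='update.version'
| lookup notified_updates.csv title OUTPUT pending_version AS already_notified
| where isnull(already_notified) OR pending_version != already_notified
| table title version pending_version
| outputlookup append=true notified_updates.csv

Only apps whose pending version has not been seen before survive the where clause, so the email fires once per new version. If suppressing for a fixed period is enough, the alert's built-in throttling (Trigger > Throttle, suppressing by Update_APP) is a simpler alternative.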
Hi all,

We want to ingest ZAP (zero-hour auto purge) logs into Splunk. We are using the Splunk Add-on for Microsoft Office 365. Is there a way to ingest ZAP logs using that add-on? If not, is there any other Splunk app that would help us ingest ZAP logs?

Please help me with the above information.
Hello,

I'm encountering the following issue on one of my indexers (out of a total of 3) after downgrading from 8.3.3 to 8.1.6. All my other components (3 SHs, CM, MC, deployer, and indexers 2 and 3) are working fine after the downgrade. I tried pretty much everything: killing the process, restarting Splunk, restarting the cloud instance; nothing seems to help. splunkd is not running.
Hi,

In all snapshots, the Business Transaction is showing as "Not Found (id:350432)" and all calls are shown as "stall". I have been able to disable the stall detection and enable it again after 10 minutes. Any idea how I can fix this?
My requirement is to get the rate of change of a certain parameter when its corresponding alert is triggered. To add more detail: we have a log file that records the backlog of a database. Once the backlog crosses a certain threshold we trigger an alert. However, this could be a false positive, since the system may be undergoing maintenance, which also makes the backlog grow. So I want the alert to trigger another query that captures the rate of growth over the last x hours; this will give more context about what is happening in the system. How can I achieve this in Splunk? Please share your ideas.
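A hedged sketch of such a follow-up query, assuming the backlog value is extracted into a field named backlog (a placeholder) and that 4 hours stands in for x:

index=db_logs earliest=-4h
| timechart span=15m max(backlog) as backlog
| delta backlog as growth_per_span
| stats avg(growth_per_span) as avg_growth, max(backlog) as peak_backlog

delta subtracts each bucket's value from the previous one, so avg_growth approximates the backlog's rate of change over the window; a steadily positive value during maintenance would support the false-positive theory. The alert can run this as a secondary scheduled search, or the logic can be folded into the alert search itself so the context arrives with the notification.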
I want to turn a dashboard red when an outlier is detected in an outlier chart. If you know a way to do this, please let me know.
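A hedged sketch of one common approach, assuming the outlier search can emit a flag field (isOutlier is a placeholder name): drive a single-value panel and let its color ranges do the signaling.

... | eval severity=if(isOutlier=1, 100, 0)
| stats max(severity) as severity

With the single value's color ranges configured so that 100 renders red, the panel turns red whenever any outlier appears in the window. Turning the entire dashboard background red would instead require custom CSS or a token-driven <html> panel, which is considerably more involved.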
Hello everyone,

I'm pretty new to Splunk and mostly learning as I go, so please bear with me if this is a common question or an easy answer, as I'm still figuring out a lot of things.

I'm building a specific search string that separates one field of information into five different unique field names, counts them, and maps this data to build a trending chart. Our data is pulled in on a daily basis. My search query works so far (although it's probably not optimized), and I'm now moving on to the formatting stage. What I want is to ensure my chart works off of our main dashboard's time picker, so that we can see the trending of our data by day, month, year, etc.

My query is working, but in the chart the data loads in on a daily mapping no matter what filter is set. This is fine for a weekly or daily filter, but when I want to view larger sets of data, such as monthly or yearly, it comes out a bit messy. Is it possible to tweak the search string so that when the data is viewed with a monthly filter, it takes the values from the month and puts the highest amount on the chart instead of every day of the month? If not, I think the other solution may just be to make a separate chart for a monthly view. That's fine too, but I just thought I would ask!

Thank you in advance. A screenshot is below showing what I see when changing to a "monthly" view, along with a snippet of the search string.

| stats count(eval(severity=="Low")) AS Low by _time
| chart values(Low) over _time
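A hedged sketch of the monthly behavior described, assuming the severity field from the snippet: first bucket by day and count, then bucket those daily counts by month and keep the highest one.

| bin _time span=1d
| stats count(eval(severity=="Low")) AS daily_low by _time
| bin _time span=1mon
| stats max(daily_low) AS Low by _time

To make the bucket size follow the time picker automatically, the span values could be driven by a dashboard token that a change handler sets to 1d, 1mon, etc. based on the selected range; otherwise a separate monthly panel, as suggested, is a perfectly reasonable fallback.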
We have onboarded firewall logs from Forcepoint, and they are not parsing properly in Splunk. We tried to find an add-on to ingest the logs, but we found none. Is there anything we can do to solve this issue? Here is an example of our current firewall log:

Feb 17 10:25:09 172.XX.XX0.XX0 "2022-02-17 10:25:51","3350841932","172.XX.XXX.XXX","Packet Filtering","Notification","New connection","Allow","123.XXX.XXX.XX","113.XX.XXX.XXX","DNS (UDP)","17","52129","53","4372.39","123.XXX.XXX.XXX","17X.XXX.XXX.XX","52129","53",,"129",,,,,,,,,,,,,,"DC-Node-01",,"2097953.17",,,"2022-02-17 10:25:51","Firewall","Connection_Allowed",,,"6899901665942596693",,,,

Please advise.
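Since the payload after the syslog header is quoted CSV, a hedged starting point is a delimiter-based search-time extraction in props.conf/transforms.conf; the sourcetype name and field names below are placeholders that would need to match the actual sourcetype and Forcepoint's documented export schema:

# props.conf
[forcepoint:firewall]
REPORT-fp_fields = forcepoint_csv_fields

# transforms.conf
[forcepoint_csv_fields]
DELIMS = ","
FIELDS = event_time, event_id, device_ip, facility, severity, event_name, action, src_ip, dest_ip, service, protocol, src_port, dest_port

FIELDS here only names the leading columns of the sample event; the remaining columns would be filled in the same way once the full schema is known.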
If I don't put a wildcard in the search term after extracting a field, the search returns nothing. The field extraction is successful, and the field values are visible when searching on just the index and sourcetype.

myfield=aaaa -> no results
myfield=*aaaa* -> results found

It works like this for all fields of a specific index.
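A hedged guess at the cause: when only a wildcarded match works, the stored value often carries invisible leading or trailing characters (whitespace, a carriage return, or quotes) that the extraction picked up. One way to confirm, using the field name from the post:

index=your_index sourcetype=your_sourcetype
| eval raw_len=len(myfield), trimmed_len=len(trim(myfield))
| table myfield raw_len trimmed_len

If raw_len is larger than trimmed_len, tightening the extraction regex (or trimming in a calculated field) should make myfield=aaaa match without wildcards.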
Summary: when using the table command, values are dropped if { is the first character.

index=someindex host="VVV" source=somesource earliest=-24h action NOT(ACTION="SUMMARY" OR ACTION="RESULT")
| dedup ID
| rename ID as "Rcrds Prcssd To Date"
| rename EVENT_DT as "Date Time" EVENT as "API EVENT"
| convert ctime(_time) as RunDate timeformat="%m/%d/%Y %H:%M %p"
| table ID, RunDate, ACTION, "API EVENT"
| sort -ID

When the "API EVENT" field has a value starting with {, the remaining values are dropped. If I replace

| table ID, RunDate, ACTION, "API EVENT"

with

| fields ID, RunDate, ACTION, "API EVENT"

I see the { and the remaining values for "API EVENT".

Why is the table command dropping values?
Query:

index=xxx source=Perfmon:LogicalDisk host=$h$ (counter="Disk Reads/sec" OR counter="Disk Writes/sec")
| eval read_ops=if(counter="Disk Reads/sec", Value, 0)
| eval write_ops=if(counter="Disk Writes/sec", Value, 0)
| eval tot_ops=write_ops+read_ops
| fields read_ops write_ops tot_ops
| timechart max(read_ops) max(write_ops) max(tot_ops)

I need to sum read_ops and write_ops into a total-ops field for each time interval (1 min) for a timechart, because the write-ops and read-ops values are in separate rows per time interval. Example below:

2/16/22 5:29:59.000 PM  02/16/2022 17:29:59.224 -0500 collection=LogicalDisk object=LogicalDisk counter="Disk Writes/sec" instance=_Total Value=27.222955244825506
2/16/22 5:29:59.000 PM  02/16/2022 17:29:59.224 -0500 collection=LogicalDisk object=LogicalDisk counter="Disk Reads/sec" instance=_Total Value=5.316598854323969
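Because each event carries only one of the two counters, tot_ops at eval time always equals just that event's counter; the addition has to happen after both values land in the same time bucket. A hedged sketch:

index=xxx source=Perfmon:LogicalDisk host=$h$ (counter="Disk Reads/sec" OR counter="Disk Writes/sec")
| eval read_ops=if(counter="Disk Reads/sec", Value, 0)
| eval write_ops=if(counter="Disk Writes/sec", Value, 0)
| timechart span=1m max(read_ops) as read_ops, max(write_ops) as write_ops
| eval tot_ops=read_ops+write_ops

Computing tot_ops after timechart adds the per-minute maxima of the two counters; if the intent is instead the maximum of instantaneous totals, the events would first need to be paired per timestamp (e.g. with stats by _time) before taking the max.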
I would like to list results from two events that are linked via a common field (system_id), but searched via a value only found in one event.

Event 1: client, phone_number, request_type, system_id
Event 2: client_bank, bank_request, response_code, system_id

Both events share the same system_id; however, I only know the phone_number and need to use that to list both events. Any help would be greatly appreciated.
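A hedged sketch using a subsearch to resolve the system_id from the phone number first (the index name and phone value are placeholders):

index=main
    [ search index=main phone_number="5551234567"
      | fields system_id ]

The subsearch returns the matching system_id values, and the outer search then retrieves every event, of both types, carrying those IDs. An alternative that avoids subsearch result limits is to group with stats and filter afterwards:

index=main
| stats values(*) as * by system_id
| search phone_number="5551234567"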