All Topics


Hello. I need help solving this. I have the UF installed on a RHEL 7.9 server. Behind that server is another RHEL 7.9 machine. This machine does not have the UF installed and is not connected to the domain; it is only connected to the server through the NIC. All of the machine's logs are forwarded to the server through rsyslog, and the server with the UF installed then forwards both machines' logs to the Splunk server. Everything works great. Both of these machines have ClamAV installed. I need to be able to see each machine's ClamAV definition versions in the Splunk dashboard. How can I do that?
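One answer pattern, as a minimal sketch: freshclam logs the definition-database versions whenever it checks for updates, so if /var/log/freshclam.log is among the files rsyslog forwards, the versions can be extracted in SPL. The index name, sourcetype, and the exact rex below are assumptions to adapt to the forwarded log format:

  index=linux_syslog sourcetype=syslog ("daily.cvd" OR "daily.cld")
  | rex "daily\.c[vl]d\D+\(version: (?<daily_version>\d+)"
  | stats latest(daily_version) as daily_version by host

A single-value panel or table over this search then shows the current definition version per machine.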
After using multiple append=t and prestats=t I am unable to use stats to capture the data into one tidy row, because some of the mstats data might arrive late. Is it possible to get Splunk to take the last value of each of the columns (when the current one does not exist) and place it at the end?

  | mstats append=t prestats=t min("mx.service.status") min(mx.service.dependencies.status) min(mx.service.resources.status) min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type"
  | mstats append=t prestats=t max("mx.service.replicas") WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 service.type IN (agent-based launcher-based) span=10s BY service.name expected.count
  | mstats append=t prestats=t min("mx.service.deployment.status") max("mx.service.replicas") WHERE "index"="metrics_test" service.type IN (agent-based launcher-based) AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" "service.type" forked
  | rename service.name as Service_Name, service.type as Service_Type

In the attached image you can see, in orange at 13:51:30, that only some of the data arrived at that time. The issue is that if I do a stats on that, the 13:51:30 "Status_numeric" and "Dependencies" are blank. I have tried streamstats and it kind of works, but in the case below, Deployment did not get a value. Also, I don't know how to get forked and Expected to the last timestamp... any help would be great, thanks.
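One way to emulate a per-service filldown, offered as a sketch (the column names are assumptions taken from the screenshot): stats aggregation functions ignore nulls, so streamstats last(...) carries the most recent non-null value forward within each service group, which plain filldown cannot do per group:

  ... | sort 0 _time
  | streamstats last(Status_numeric) as Status_numeric last(Dependencies) as Dependencies last(Deployment) as Deployment last(Expected) as Expected by Service_Name, Service_Type

Appended after the existing pipeline, each row then shows the latest known value of every series as of that timestamp, even when a given 10s span only received some of the metrics.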
Hello @alacercogitatus
The Google Workspace for Splunk add-on throws an error. I installed the "Google Workspace for Splunk" add-on, configured the Google Workspace account, and am able to see the users list in Splunk. The "Admin SDK Reports Ingest" configuration throws the error below for the services "login" and "user_account":

  error_message="invalid literal for int() with base 10: '1636719173.276311'"
  error_type="<class 'ValueError'>"
  error_arguments="invalid literal for int() with base 10: '1636719173.276311'"
  error_filename="google_client.py"
  error_line_number="863"
  input_guid="840262-6716-ff73-e5c-c0816800774"
  input_name="Account"
Hi all, What would be a simple approach to creating an alert based on the following log data? The objective is to send an alert if the "Return Code" does not equal the number "1".

  # Reporting Started #
  #####################
  # Processing task 1
  # Processing task 2
  # Processing task 3
  #####################
  # Return Code 1

TIA
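A minimal sketch of the alert search, assuming the events land in an index named app_logs (the index and the rex are placeholders to adapt to the real sourcetype):

  index=app_logs "Return Code"
  | rex "#\s*Return Code\s+(?<return_code>\d+)"
  | where tonumber(return_code) != 1

Saved as a scheduled alert over the reporting window, with the trigger condition "number of results > 0", this fires only when a run reports a code other than 1.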
Hi all, I have a problem that I cannot solve. I have data that is the result of a loadjob where the fields are named 0_PREVIOUS_MONTH, 1_PREVIOUS_MONTH, 2_PREVIOUS_MONTH, ... 12_PREVIOUS_MONTH. I would like to add the values of the fields starting from April 1st (1/4) up to the current month. Let me explain with an example: today we are in November, and I need the sum of a row that starts from April 1st until today. So if I do current month - 4 + 1 = 8, I have to add: 4_PREVIOUS_MONTH + 5_PREVIOUS_MONTH + ... + 11_PREVIOUS_MONTH, which is exactly 8 months. I thought of a foreach with this syntax:

  | foreach *_PREVIOUS_MONTH [eval TOTAL = TOTAL + if(*_PREVIOUS_MONTH >= 4, <<FIELD>>, 0)]

but it does not work. Can you help me? I'm going crazy trying to find a solution. Tks Bye Antonio
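A sketch of a working variant: inside foreach, <<MATCHSTR>> expands to the text matched by the wildcard (the month number), TOTAL must be initialized before the additions or they stay null, and the field reference needs single quotes because the names start with a digit:

  | eval TOTAL = 0
  | foreach *_PREVIOUS_MONTH
      [ eval TOTAL = TOTAL + if(tonumber("<<MATCHSTR>>") >= 4, coalesce('<<FIELD>>', 0), 0) ]

The >= 4 threshold follows the November example above; the start month would need to be computed dynamically (e.g. from strftime(now(), "%m")) if the rule must track the fiscal year automatically.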
I am getting a success percentage of 97.00% from the query, and my requirement is to add an alert when the success percentage is below 95.00%. I am getting the success % from the query below; please suggest how to add an alert when the success rate is below 95.00% over a one-hour span.
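A minimal sketch of the alert step, assuming the existing search ends with a field named successrate; if that field is a formatted string like "97.00%", strip the sign before comparing:

  ... | eval successrate = tonumber(replace(successrate, "%", ""))
  | where successrate < 95.00

Schedule the alert with a one-hour window (earliest=-1h), run it every hour, and trigger when the number of results is greater than 0.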
Hello community, My client has experienced a severe issue on a Search Head Cluster in the past days due to the scheduler's behavior. We had a scheduled search taking too long to run between two schedules, which ended up with concurrent jobs running (although its concurrency_limit=1). After a while, the scheduler started a burst of deferrals (up to 26k defers per minute across ~600 individual savedsearches). The particularly strange behavior, for me, lies in the concurrency_limit set on deferred schedules during the burst: 408 instead of the usual 1 (408 corresponds to the maximum search concurrency of the SHC: 4 SHs x 102). The burst terminated by itself after a while. We experienced several other bursts in lower proportions; the big episodes disappeared after the correction of the scheduled search mentioned previously. Do you guys have any idea about the reason for the concurrency_limit changing on the fly? (No change was performed by a human.) The graph shows deferred events by concurrency_limit; concurrency_limit=408 is on an overlay to show the global behavior alongside the other values.

  index="_internal" AND sourcetype=scheduler AND host=<MySHMaster> status="continued" earliest=-72h
  | timechart span=1min count by concurrency_limit

Regards,
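If it helps the investigation, the scheduler events usually carry a reason field explaining each deferral; a sketch of a companion search (the reason field name is an assumption based on the scheduler sourcetype):

  index="_internal" sourcetype=scheduler status="continued" host=<MySHMaster> earliest=-72h
  | stats count by reason, concurrency_limit
  | sort - count

Grouping by reason alongside concurrency_limit may show whether the 408 values coincide with a different deferral cause than the usual per-search limit.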
I am using the query below:

  index=A sourcetype IN (Compare,Fire)
  | fillnull value=""
  | search Name="*SWZWZQ0001*" OR Name="*SADAPP0002*" OR Name="*SALINU0016*" OR Name="*SGGRNP1002*"
  | stats values(*) as * by sysid
  | eval Status=if(F_Agent_Version="" AND C_Agent_Version="","Not Covered","Covered")
  | table sourcetype sysid Name F_Agent_Version C_Agent_Version Status

  sourcetype     ITAM_sysid  ITAM Name   Fire Agent Version  Compare Agent Version  Status
  Compare Fire   0003fb      SALINU0016  32.30.              6.3                    Not Covered
  Compare Fire   003fcb      SGGRNP1002  29.7                                       Not Covered
  Fire           0d456       SADAPP0002  32.3                                       Covered
  Compare        0d526       SWZWZQ0001                                             Not Covered

Due to the nulls in the first and second rows (SALINU0016, SGGRNP1002) for Fire Agent Version and Compare Agent Version, I am getting "Not Covered" instead of "Covered". Please let me know how to get rid of the nulls and make the Status "Covered".
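One likely culprit, offered as a sketch: fillnull value="" runs before stats values(*), so the empty string gets collected alongside the real version and the field turns multivalue, which breaks the equality test. Moving the null handling after the aggregation avoids that:

  index=A sourcetype IN (Compare,Fire)
  | search Name="*SWZWZQ0001*" OR Name="*SADAPP0002*" OR Name="*SALINU0016*" OR Name="*SGGRNP1002*"
  | stats values(*) as * by sysid
  | eval Status=if(coalesce(F_Agent_Version,"")="" AND coalesce(C_Agent_Version,"")="","Not Covered","Covered")
  | table sourcetype sysid Name F_Agent_Version C_Agent_Version Status

coalesce() treats a missing field and an empty one the same way, so any row with at least one version comes out Covered.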
Splunk App for AWS - config loading issue. Using Splunk Enterprise and the App for AWS, both on the latest versions. Please find the screenshot below: [screenshot attached in the original post]. Help me to resolve the issue.
hi, I use a basic search to count the number of incidents by town:

  index=toto sourcetype=tutu
  | stats dc(id) by site

Now I would like to display these results on a map, with a bubble showing the number of incidents for each site. So I have created a lookup (gps.csv) like this:

  site,Longitude,Latitude
  AGDE,3.4711992,43.3154
  NANTES,-1.58295,47.235197
  TOULOUSE,1.3798,43.6091

So what do I have to do to cross-reference my search with my lookup in order to get a bubble count on my map visualization? thanks
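A minimal sketch, assuming gps.csv has been uploaded as a lookup table file: join the coordinates onto the aggregated results, then let geostats prepare them for the Cluster Map visualization:

  index=toto sourcetype=tutu
  | stats dc(id) as incidents by site
  | lookup gps.csv site OUTPUT Latitude Longitude
  | geostats latfield=Latitude longfield=Longitude sum(incidents) by site

Selecting the Cluster Map visualization on this search renders one bubble per site, sized by the incident count.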
Hello All, How can I remove words and characters from a multivalue field without using rex? I have a field named OS:

  OS:
  Windows-2016
  Windows-2010

How can I take out everything that comes before the hyphen and just end up with the below?

  OS:
  2016
  2010
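A minimal sketch using only eval functions (mvmap needs Splunk 8.0 or later): mvmap applies an expression to every value of a multivalue field, and split/mvindex keep the part after the hyphen without any regular expression:

  | eval OS=mvmap(OS, mvindex(split(OS, "-"), 1))

If some values contain more than one hyphen, mvindex(split(OS, "-"), -1) keeps the last segment instead.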
Hi, like below:

Sourcetype=Fire
  Name   OS       Compare_Version  Compare_Agent Installed  sysid
  ABC11  windows  10.1             2.2                      qweq

Sourcetype=Compare
  Name   OS       Fire_Version     Fire_Agent Installed     sysid
  (no rows)

After doing:

  index=A sourcetype IN (Compare,Fire)
  | stats values(*) as * by sysid
  | mvexpand Name
  | stats values(*) as * by Name

Since sourcetype=Compare has no rows for a particular sysid or Name, I am not getting the exact output. It is null, so I need to fill null value="" for the sourcetype which has no rows, and the same is required for the other scenario too (when sourcetype=Fire has no data and sourcetype=Compare has data). Please let me know a search which accommodates this too.
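A sketch of one approach, using the field names as they appear above (quote any that contain spaces): apply fillnull after the final stats, so whichever sourcetype contributed no rows still ends up with empty-string fields instead of nulls:

  index=A sourcetype IN (Compare,Fire)
  | stats values(*) as * by sysid
  | mvexpand Name
  | stats values(*) as * by Name
  | fillnull value="" Fire_Version "Fire_Agent Installed" Compare_Version "Compare_Agent Installed"

fillnull with an explicit field list only touches those columns, and it works symmetrically whichever of the two sourcetypes is missing data.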
Hi All, I have a dropdown with the values below:

  All
  Task1_a
  Task1_b
  Task1_c
  Task2_a
  Task2_b
  Task2_c

I want to have two other options in the dropdown, Task1_all and Task2_all: when Task1_all is clicked it should show the values of Task1_a, Task1_b and Task1_c, and when Task2_all is clicked it should show the values of Task2_a, Task2_b and Task2_c. Is there any solution so that I can include these 2 options in my dropdown along with the existing options?
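A minimal sketch in Simple XML, assuming the token feeds a field called task in the panel search: giving the _all choices a wildcard value lets one choice match the whole group:

  <input type="dropdown" token="task_tok">
    <label>Task</label>
    <choice value="*">All</choice>
    <choice value="Task1_*">Task1_all</choice>
    <choice value="Task2_*">Task2_all</choice>
    <choice value="Task1_a">Task1_a</choice>
    <choice value="Task1_b">Task1_b</choice>
    <choice value="Task1_c">Task1_c</choice>
    <choice value="Task2_a">Task2_a</choice>
    <choice value="Task2_b">Task2_b</choice>
    <choice value="Task2_c">Task2_c</choice>
    <default>*</default>
  </input>

The panel search then uses task=$task_tok$. If the real values are not wildcard-friendly, an alternative is to give each _all choice a comma-separated value and match it with an IN clause instead.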
I have 2 types of events that come in the following, random, format:

  AAAAAAABAAAAAABAAAAAAAAABAABAAA

B's never repeat, and they are always surrounded by an A. The A prior to a B has source information, and the A after the B has result/destination information. The B itself has command information. I am trying to compare the values of fields in the A events both before and after the B, and correlate them with what the B event contains. Pseudo example:

  Event A: mac_address, ip_address, new_session_flag
  Event B: mac_address, new_ip_address
  Event A: mac_address, ip_address, new_session_flag

I need to know the source IP address (a.ip_address before), the IP address it was attempted to move to (b.new_ip_address), the resulting IP address (a.ip_address after), and the session flag status (a.new_session_flag after). How can I do this? I am working with millions of records, so I cannot use append/join/etc. Below is an example of a search where new_session_flag begins with 0 and ends with 1, but I don't want to filter based on new_session_flag; I want all events where B is the 2nd event, regardless of new_session_flag status:

  (event=A OR event=B)
  | transaction mac_address startswith=new_session_flag=0 endswith=new_session_flag=1 maxevents=3 unifyends=true mvlist=true
  | fields event mac_address new_ip_address ip_address new_session_flag
  | where mvindex(event,1) == "B"
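A sketch of a join-free alternative using streamstats, with field names taken from the pseudo example: one pass carries the previous A's fields forward onto each B, a reversed pass carries the following A's fields backward (two sorts over millions of rows is not free, but it avoids transaction and join entirely):

  (event=A OR event=B)
  | sort 0 _time
  | streamstats current=f window=1 last(ip_address) as src_ip by mac_address
  | sort 0 - _time
  | streamstats current=f window=1 last(ip_address) as result_ip last(new_session_flag) as result_flag by mac_address
  | where event="B"
  | table _time mac_address src_ip new_ip_address result_ip result_flag

The current=f window=1 combination means "the single event before this one in the current sort order within the mac_address group", which is exactly the neighbouring A on each side of a B.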
Hi all, I am new to Splunk and have been trying to work on a use case to detect anomalous switches from one type of account to another. Index A has the list of switches, i.e. two columns: 'Old account', 'New account'. Index B has the type of each account, in two columns: 'Accounts', 'Account_types'. So far, using commands like join (after renaming certain columns), I have been able to get to a table of 4 columns: 'Old account', 'Old_account_type', 'New account', 'New_account_type'. Aim: I need to implement logic to detect whether old accounts switch to 'unusual' new accounts. Idea so far: I wish to create a dictionary of some sort holding the list of new accounts and new_account_type(s) each old account has switched to. Then, if the old account switches to an account not in this dictionary, I wish to flag it up. Does this sound like a logical idea? For example, looking at the past 4 switches, if an old account named A of type 'admin' switched to new accounts named 1, 2, 3, 4 of types admin, user, admin, admin, then the dictionary should look like:

  A_switches = {
    "Old Account": "A",
    "old_account_type": "admin",
    "New Account": [1, 2, 3, 4],
    "type": [admin, user]
  }

This query needs to run each hour to flag unusual switches. Can someone suggest how I can implement the above logic, i.e. create a dictionary and spot unusual activity? Apologies for the long question and if something isn't clear.
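A sketch of the usual lookup-as-baseline pattern, with assumed field names old_account and new_account: one scheduled search maintains the "dictionary" as a CSV lookup, and the hourly search flags switches not in it. Baseline (run daily, or as often as the baseline should learn):

  index=A earliest=-30d
  | stats values(new_account) as known_new_accounts by old_account
  | outputlookup account_switch_baseline.csv

Hourly detection:

  index=A earliest=-1h
  | lookup account_switch_baseline.csv old_account OUTPUT known_new_accounts
  | where isnull(known_new_accounts) OR isnull(mvfind(known_new_accounts, "^" . new_account . "$"))

mvfind returns null when no value of the multivalue field matches, so both a brand-new old_account and a never-seen new_account get flagged; if account names can contain regex metacharacters, swap mvfind for an exact-match comparison.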
I am trying to search through transactions and check their response codes so that we can determine a percentage of failed/declined transactions. However, given that transactions could be as few as 5-10 per hour or as many as 1000 per hour, I need a way to check, for every 100 events/transactions, how many were approved and how many were declined. I have not found a way to search for the last 100 while ignoring the time period: if I search the last 5 minutes for 100 transactions/events it may only return 2, and I need it to go past the 5 minutes and find the last 100 transactions. If I increase the search time to 30 minutes, it may find 100, but there could be 1000, and this is not an accurate reflection of the percentage of approved/declined transactions.
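A minimal sketch, assuming a response_code field in which "00" means approved (both the field name and the value are placeholders): events stream back in reverse time order, so head 100 keeps exactly the 100 most recent regardless of how wide the time range is:

  index=transactions earliest=0
  | head 100
  | stats count(eval(response_code="00")) as approved count as total
  | eval approved_pct = round(approved / total * 100, 2)

A wide (or all-time) range is acceptable here because head finalizes the search as soon as 100 events have been returned.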
Hi, I need to send logs from a Django REST API to Splunk via the syslog protocol. I am currently facing connection issues with the host and port I am using. Here is the code I am using: [screenshot attached in the original post]. My current output is as follows: [screenshot attached in the original post]. Can someone please advise me on what port or host should be used to send such logs into Splunk? Do I also need to know which indexer these logs will be stored on? Thanks
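A minimal sketch of the receiving side, assuming a plain UDP syslog input opened directly on a Splunk instance (the port 5514, sourcetype, and index are placeholders; any free port above 1024 avoids needing root):

  # inputs.conf on the indexer or a heavy forwarder
  [udp://5514]
  sourcetype = django_syslog
  index = main

Python's logging.handlers.SysLogHandler would then point at that host and port, e.g. address=("splunk-host", 5514). You do not address a specific indexer from the client; the destination index is chosen by the input stanza (or by props/transforms) on the Splunk side. For production, a syslog server (rsyslog/syslog-ng) or the HTTP Event Collector in front of Splunk is the more common pattern.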
Hi Team, I want to consult with you about the following situation: I set up an email alert for detecting a specific performance metric of one type of machine (config=A). The alert fires when it detects that the latest run's value has regressed more than 5% from the previous run's value on the same type of machine (config=A). However, this alert can only cover one machine type (config=A). If we need to track many other machines (config=A, B, C, D), each one needs an alert set up like this, since each type of machine's value can only be compared with itself. That is very cumbersome, considering we also need to monitor other performance metrics for all machines. Do we have a better way to generalize these alerts into one for this case? Say, an alert that can loop over all types of machines, fetch and compare a specific performance metric, and raise an alert accordingly?
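A sketch of one generalized alert, with assumed field names value, metric, and config, and assuming a higher value means worse: streamstats computes the previous run's value per (config, metric) pair, so a single search covers every machine type and metric at once:

  index=perf_results
  | sort 0 _time
  | streamstats current=f window=1 last(value) as prev_value by config, metric
  | eval regression_pct = round((value - prev_value) / prev_value * 100, 2)
  | where regression_pct > 5

Schedule it once; each result row names the config and metric that regressed, and the email action can embed them via $result.config$ / $result.metric$ tokens.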
Hi, I have not been having much luck creating what I need. I am looking for the best way to display the percentages of a field's values. For instance:

  index=foo
  | stats count by IP

and the results might be:

  IP          count  percentage
  10.10.10.1  12     .60
  10.10.10.5  1      .05
  10.10.10.8  7      .35

I am looking for a clean and efficient way to calculate the percentages, in this case for the occurrence of an IP in a given time range in a search. I will be using it in an ML density function model, so any other suggestions are appreciated as well. Please let me know if you have a suggestion. Thank you
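A minimal sketch: eventstats writes the grand total onto every row without collapsing the results, so the ratio falls out in one eval:

  index=foo
  | stats count by IP
  | eventstats sum(count) as total
  | eval percentage = round(count / total, 2)
  | fields - total

This stays efficient even for high-cardinality fields, because the eventstats runs over the already-aggregated rows rather than the raw events.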
After recently reviewing the 8.2.3 hardware requirements, I noticed my deployment is a bit under spec. For instance, Splunk recommends 800 IOPS and 300GB for search head node disks. https://docs.splunk.com/Documentation/Splunk/8.2.3/Capacity/Referencehardware#What_storage_type_should_I_use_for_a_role.3F

Search heads with high ad-hoc or scheduled search loads should use SSD. An HDD-based storage system must provide no less than 800 sustained IOPS. A search head requires at least 300GB of dedicated storage space. Indexers should use SSD or NVMe drives.

In my case, I have a dedicated NVMe "data" drive for all indexed data (except _internal) and an SSD drive for the OS and the Splunk application (as on the search heads).

Does an indexer require the same 800 IOPS / 300GB disk as a search head?
Does an indexer need to write information to disk per search execution?
Also, has anyone experienced issues with using GP3 disks on SHCs or IDXCs? (Excluding use as a dedicated data drive, for which I use NVMe.)

Thank you