All Topics


Hi, I use a basic search to count the number of incidents by town:

    index=toto sourcetype=tutu | stats dc(id) by site

Now I would like to display these results on a map, with a bubble showing the number of incidents for each site. I have created a lookup (gps.csv) like this:

    site,Longitude,Latitude
    AGDE,3.4711992,43.3154
    NANTES,-1.58295,47.235197
    TOULOUSE,1.3798,43.6091

How do I cross-reference my search with my lookup to get a bubble count on my map visualization? Thanks.
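One possible sketch, assuming gps.csv is uploaded as a lookup table file with the columns above; the lookup enriches each site with coordinates and geostats feeds the cluster map visualization:

    index=toto sourcetype=tutu
    | stats dc(id) as incident_count by site
    | lookup gps.csv site OUTPUT Latitude Longitude
    | geostats latfield=Latitude longfield=Longitude sum(incident_count) by site

The result can then be rendered with the Cluster Map visualization, which sizes each bubble by the aggregated value.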
Hello All, how can I remove words and characters from a multivalued field without using rex? I have a field named OS:

    OS: Windows-2016 Windows-2010

How can I take out everything that comes before the hyphen and just end up with the below?

    OS: 2016 2010
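A minimal sketch of one eval-only approach (no rex), assuming OS is multivalued and each value contains a single hyphen; mvmap requires Splunk 8.0 or later:

    ... | eval OS=mvmap(OS, mvindex(split(OS,"-"),1))

split breaks each value on the hyphen and mvindex keeps the part after it; if a value could contain more than one hyphen, mvindex(split(OS,"-"),-1) would keep only the last segment instead.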
Hi, I have data like below.

    Sourcetype=Fire
    Name OS Compare_Version Compare_Agent Installed sysid
    ABC11 windows 10.1 2.2 qweq

    Sourcetype=Compare
    Name OS Fire_Version Fire_Agent Installed sysid

After doing

    index=A sourcetype IN (Compare,Fire) | stats values(*) as * by sysid | mvexpand Name | stats values(*) as * by Name

I am not getting the exact output, because sourcetype=Compare has no rows for a particular sysid or Name, so those fields come back null. I need to fill the null values with "" for the sourcetype that has no rows, and the same is needed for the opposite scenario too (when sourcetype=Fire has no data and sourcetype=Compare has data). Please let me know a search that accommodates this.
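A minimal sketch of one possible adjustment, assuming the goal is simply to replace the missing values with empty strings after the aggregation; the field list on fillnull is illustrative and would need to match the real field names:

    index=A sourcetype IN (Compare,Fire)
    | stats values(*) as * by sysid
    | mvexpand Name
    | stats values(*) as * by Name
    | fillnull value="" OS Compare_Version Compare_Agent Fire_Version Fire_Agent Installed

Without a field list, fillnull value="" applies to every field in the results, which may also be acceptable here.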
Hi All, I have a dropdown with the values below:

    All
    Task1_a
    Task1_b
    Task1_c
    Task2_a
    Task2_b
    Task2_c

I want to add two more options to the dropdown, Task1_all and Task2_all, so that when Task1_all is selected it shows the values of Task1_a, Task1_b and Task1_c, and when Task2_all is selected it shows the values of Task2_a, Task2_b and Task2_c. Is there a way to include these two options in my dropdown along with the existing ones?
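One possible sketch in Simple XML, assuming the dropdown token (here called task) is consumed as a wildcard filter against a hypothetical field task_name in the panel search; the extra choices carry wildcard values so Task1_all matches Task1_a, Task1_b and Task1_c:

    <input type="dropdown" token="task">
      <label>Task</label>
      <choice value="*">All</choice>
      <choice value="Task1_*">Task1_all</choice>
      <choice value="Task2_*">Task2_all</choice>
      <choice value="Task1_a">Task1_a</choice>
      <!-- remaining Task1_* and Task2_* choices as before -->
    </input>

The panel search would then filter with something like task_name=$task$ so the wildcard values expand naturally.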
I have 2 types of events that come in the following, random, format: AAAAAAABAAAAAABAAAAAAAAABAABAAA. B's never repeat, and they are always surrounded by A's. The A prior to a B has source information, the A after the B has result/destination information, and the B itself has command information. I am trying to compare the values of fields in the A events both before and after the B, as well as correlate with what the B event contains. Pseudo example:

    Event A: mac_address, ip_address, new_session_flag
    Event B: mac_address, new_ip_address
    Event A: mac_address, ip_address, new_session_flag

I need to know the source IP address (a.ip_address before), the IP address it was attempted to move to (b.ip_address), the resulting IP address (a.ip_address after), and the session flag status (a.new_session_flag after). How can I do this? I am working with millions of records, so I cannot use append/join/etc. Below is an example of a search where the new_session_flag begins with 0 and ends with 1, but I don't want to filter based on new_session_flag; I want all events where B is the 2nd event, regardless of new_session_flag status:

    (event=A OR event=B)
    | transaction mac_address startswith=new_session_flag=0 endswith=new_session_flag=1 maxevents=3 unifyends=true mvlist=true
    | fields event mac_address new_ip_address ip_address new_session_flag
    | where mvindex(event,1) == "B"
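A hedged sketch of an alternative without transaction, using two streamstats passes to carry the neighbouring A events' fields onto each B event; field names follow the pseudo example above, and the sort/reverse passes can be expensive on very large result sets:

    (event=A OR event=B)
    | sort 0 mac_address _time
    | streamstats current=f last(ip_address) as src_ip last(new_session_flag) as src_flag by mac_address
    | reverse
    | streamstats current=f last(ip_address) as result_ip last(new_session_flag) as result_flag by mac_address
    | where event="B"
    | table _time mac_address src_ip new_ip_address result_ip result_flag

The first pass copies the most recent preceding A's fields onto the B; after reverse, the second pass copies the fields of the A that follows it.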
Hi all, I am new to Splunk and have been trying to work on a use case to detect anomalous switches from one type of account to another.

Index A has the list of switches, i.e. two columns: 'Old account', 'New account'.
Index B has the type of each account, with two columns: 'Accounts', 'Account_types'.

So far, using commands like join (after renaming certain columns), I have been able to get to a table of 4 columns: 'Old account', 'Old_account_type', 'New account', 'New_account_type'.

Aim: I need to implement logic to detect when old accounts switch to 'unusual' new accounts.

Idea so far: I wish to create a dictionary of some sort holding the list of new accounts and new_account_type(s) each old account has switched to. Then, if the old account switches to an account not in this dictionary, I wish to flag it up. Does this sound like a logical idea? For example, looking at the past 4 switches, if an old account named A of type 'admin' switches to new accounts named 1, 2, 3, 4 of types admin, user, admin, admin, then the dictionary should look like:

    A_switches = { "Old Account": "A", "old_account_type": "admin", "New Account": [1, 2, 3, 4], "type": [admin, user] }

This query needs to run each hour to flag up unusual switches. Can someone suggest how I can implement the above logic, i.e. create the dictionary and spot unusual activity? Apologies for the long question and if something isn't clear.
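One possible sketch using a CSV lookup as the 'dictionary', assuming the switch events live in index=A with fields renamed to old_account and new_account, and a hypothetical lookup file named known_switches.csv. A baseline search records every pair seen over a longer history, and an hourly search flags pairs not yet in the lookup:

    index=A earliest=-30d
    | stats count by old_account new_account
    | eval known=1
    | outputlookup known_switches.csv

    index=A earliest=-1h
    | lookup known_switches.csv old_account new_account OUTPUT known
    | where isnull(known)

The first search could be scheduled less frequently to refresh the baseline, and the second scheduled hourly as the alert; the same pattern extends to account types by adding those fields to both searches.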
I am trying to search through transactions and check their response codes so that we can determine the percentage of failed/declined transactions. However, because volume can be as low as 5-10 transactions per hour or as high as 1000 per hour, I need a way to check, for every 100 events/transactions, how many were approved and how many were declined. I have not found a way to search for the last 100 while ignoring the time period: if I search the last 5 minutes it may only return 2 transactions, so I need the search to go past the 5 minutes and find the last 100. If I increase the search window to 30 minutes it may find 100, but there could be 1000, and that is not an accurate reflection of the percentage of approved/declined transactions.
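A minimal sketch of one way to look at exactly the most recent 100 transactions regardless of the time window, assuming a hypothetical field response_code with a value of declined; the search would be run over a time range wide enough to always contain at least 100 events:

    index=transactions sourcetype=payments
    | head 100
    | stats count as total count(eval(response_code="declined")) as declined
    | eval pct_declined=round(100*declined/total, 2)

head keeps only the first 100 results, which for an event search are the 100 most recent, so the percentage is always computed over a fixed-size window of transactions rather than a fixed time span.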
Hi, I need to send logs from a Django REST API to Splunk via the syslog protocol. I am currently facing connection issues with the host and port I am using. Here is the code I am using: [code screenshot not included] My current output is as follows: [output screenshot not included] Can someone please advise me on what port or host should be used to send such logs into Splunk? Do I also need to know which indexer these logs will be stored on? Thanks
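A hedged sketch of the receiving side, assuming syslog is sent straight to a Splunk instance (an indexer or heavy forwarder) rather than to a dedicated syslog server; the port number and index name here are arbitrary examples, and the Django SysLogHandler would then be pointed at that host and port:

    # inputs.conf on the receiving Splunk instance (example port and index)
    [udp://5514]
    sourcetype = syslog
    index = django_logs

A [tcp://5514] stanza works the same way if TCP syslog is preferred; the sender does not need to know which indexer ultimately stores the events, only the host and port of the instance accepting the input.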
Hi Team, I want to consult with you about the following situation: I set up an email alert to detect a specific performance metric for one type of machine (config=A). The alert fires when it detects that the latest run's value has regressed more than 5% compared with the previous run's value for the same machine type (config=A). However, this alert only covers that one machine type. If we need to track other machine types (config=A, B, C, D), each one needs its own alert set up like this, since each type's value can only be compared with itself, which is very cumbersome considering we also need to monitor other performance metrics for all machines. Is there a better way to generalize these alerts into one for this case, e.g. an alert that loops over all machine types, fetches and compares a specific performance metric, and raises an alert accordingly?
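A minimal sketch of how one alert could cover every machine type at once, assuming hypothetical field names config (the machine type) and value (the metric), one event per run, and that a drop in value counts as a regression (flip the subtraction if the opposite is true); the by config clause does the per-type comparison in a single search:

    index=perf metric_name="my_metric"
    | sort 0 config -_time
    | streamstats count as run_rank by config
    | where run_rank <= 2
    | stats first(value) as latest_run last(value) as previous_run by config
    | eval regression_pct=round(100 * (previous_run - latest_run) / previous_run, 2)
    | where regression_pct > 5

Saved as an alert that triggers when the number of results is greater than zero, this raises one notification listing every config that regressed, instead of one alert definition per machine type.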
Hi, I have not been having much luck creating what I need. I am looking for the best way to display the percentages of a field's values. For instance

    index=foo | stats count by IP

and the results might be:

    IP          count  percentage
    10.10.10.1  12     .60
    10.10.10.5  1      .05
    10.10.10.8  7      .35

I am looking for a clean and efficient way to calculate the percentages, in this case for the occurrence of each IP over a given time range in a search. I will be using it in an ML density function model, so any other suggestions are appreciated as well. Please let me know if you have a suggestion. Thank you
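A minimal sketch of one common approach, using eventstats to put the grand total on every row so the share can be computed per IP; the top command is an alternative that emits a percent column directly:

    index=foo
    | stats count by IP
    | eventstats sum(count) as total
    | eval percentage=round(count/total, 2)
    | fields - total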
After recently reviewing the 8.2.3 hardware requirements, I noticed my deployment is a bit under spec. For instance, Splunk recommends 800 IOPS and 300GB for search head node disks. https://docs.splunk.com/Documentation/Splunk/8.2.3/Capacity/Referencehardware#What_storage_type_should_I_use_for_a_role.3F Search heads with high ad-hoc or scheduled search loads should use SSD. An HDD-based storage system must provide no less than 800 sustained IOPS. A search head requires at least 300GB of dedicated storage space. Indexers should use SSD or NVMe drives. In my case, I have a dedicated NVMe "data" drive for all indexed data (except _internal) and an SSD drive for the OS and the Splunk application (as on the search heads). Does an indexer require the same 800 IOPS / 300GB disk as a search head? Does an indexer need to write information to disk per search execution? Also, has anyone experienced issues with using GP3 disks on SHCs or IDXCs (excluding use as a dedicated data drive, for which I use NVMe)? Thank you
What settings, functionalities or areas would you check in a newly installed Splunk Enterprise 8.2.3 to make sure all is well? I have a large environment and plan to install ES, apps & TAs, and to cluster in as many areas as possible. Thank you for your help & advice.
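As one hedged example of a post-install sanity check (not a complete checklist), searching the internal logs for errors and warnings right after installation can surface misconfigurations early:

    index=_internal log_level IN (ERROR, WARN)
    | stats count by host component
    | sort - count

Recurring components at the top of that list (licensing, clustering, forwarding, etc.) usually point at the settings worth reviewing first.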
Hi Splunkers, I have 2 hosts, server1 & server2. Each host runs multiple processes; let's say the processes are process1 & process2. I want to create a dashboard showing the latest status of each process, Running or Not Running, on each host.

    index=os host IN (server1 server2) ARGS=*process1* OR ARGS=*process2*
    | eval process1_status=if(like(ARGS,"%process1%"),"Running","Not Running")
    | eval process2_status=if(like(ARGS,"%process2%"),"Running","Not Running")
    | stats latest(process1_status) latest(process2_status) by host
    | fillnull value=NULL

But this query is not giving correct results. Each event has an ARGS field matching either process1 or process2, never both.
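A hedged sketch of one possible restructure, deriving a single process field per event so the either/or nature of ARGS no longer forces one status to 'Not Running'; the 10-minute freshness threshold is an arbitrary assumption:

    index=os host IN (server1, server2) (ARGS="*process1*" OR ARGS="*process2*")
    | eval process=case(like(ARGS,"%process1%"),"process1", like(ARGS,"%process2%"),"process2")
    | stats latest(_time) as last_seen by host process
    | eval status=if(now() - last_seen < 600, "Running", "Not Running")
    | xyseries host process status
    | fillnull value="Not Running"

xyseries pivots the result into one row per host with one column per process, and fillnull marks processes never seen at all as Not Running.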
Dear All, I have a Linux script that runs vmstat as a daemon and writes the output every minute to a CSV file. Here is some typical output:

    _time, metric_name:vmstat.procs.runwait, metric_name:vmstat.procs.blocking, metric_name:vmstat.memory.swapped, metric_name:vmstat.memory.free, metric_name:vmstat.memory.buffers, metric_name:vmstat.memory.cache, metric_name:vmstat.swap.in, metric_name:vmstat.swap.out, metric_name:vmstat.blocks.read, metric_name:vmstat.blocks.written, metric_name:vmstat.system.interupts, metric_name:vmstat.system.contxtswtch, metric_name:vmstat.cpu.user, metric_name:vmstat.cpu.system, metric_name:vmstat.cpu.idle, metric_name:vmstat.cpu.iowait, metric_name:vmstat.cpu.stolen
    1637263961, 11, 0, 301056, 13188244, 52, 1645532, 0, 0, 258, 20, 4, 2, 2, 3, 96, 0, 0
    1637264021, 3, 0, 301056, 13193028, 52, 1645648, 0, 0, 0, 37, 1480, 2090, 0, 1, 99, 0, 0
    1637264081, 3, 0, 301056, 13193448, 52, 1645724, 0, 0, 0, 13, 700, 1097, 0, 0, 100, 0, 0
    1637264141, 3, 0, 301056, 13192100, 52, 1645812, 0, 0, 0, 17, 756, 1154, 0, 0, 100, 0, 0

Every so often I get an error in the message board like:

    The metric value=metric_name:vmstat.procs.runwait provided for source=/opt/splunkforwarder/etc/apps/TA-linux-metrics/log/read_vmstat.log, sourcetype=csv, host=foo.bar.baz, index=lnx_os_metrics is not a floating point value. Using a numeric type rather than a string type is recommended to avoid indexing inefficiencies. Ensure the metric value is provided as a floating point number and not as a string. For instance, provide 123.001 rather than "123.001".

This is not consistent, and when I look at the file it is perfectly formed; the above is an example that just threw me an error. The stanza from inputs.conf is:

    [monitor:///opt/splunkforwarder/etc/apps/TA-linux-metrics/log/read_vmstat.log]
    index = lnx_os_metrics
    sourcetype = csv

I tried with sourcetype csv as well as metrics_csv; both give the same result. What on earth could be going on here? Thanks, R.
I'm responsible for a Cisco IM & Presence system. It can support logging of messages to an external SQL database or to a third-party compliance server (like Verba). I'm not very familiar with Splunk and its suite of products. I'm being asked whether Splunk can be used to log Jabber instant messages, but I'm not sure it can be used in that capacity. Based on Cisco's IM compliance documentation, https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/im_presence/im_compliance/12_5_1/cup0_b_im-compliance-guide-1251/cup0_b_im-compliance-guide-1251_chapter_01.html, it seems like Splunk could be used to view messages in the SQL database used to archive messages. Other than that, I haven't seen any documentation showing that Splunk can be used to view or store Cisco IM & Presence instant messages between Jabber clients. Has anyone had any experience trying to use Splunk to access Cisco IMP Jabber messages? If so, do you have any experience or documentation that you could share? Thanks
I am tearing my hair out trying to figure this one out. I had a PowerShell input on my UFs (both Win10 and Server 2016) that was working fine until last week, when the events mysteriously stopped coming into my indexer. Here's the stanza from inputs.conf:

    [powershell://MPComputerStatus]
    script = get-mpcomputerstatus
    schedule = 5
    sourcetype = Windows:MPComputerStatus

(note that the schedule of 5 is just for debugging currently; normally it is set to 300)

Everything appears to be functioning normally on the UF side. I look at splunk-powershell.ps1.log and I see the same lines from before and after the issue started:

    11-11-2021...INFO Start executing script=get-mpcomputerstatus for stanza=MPComputerStatus
    11-11-2021...INFO End of executing script..., execution time=0.0149976 seconds

However, the events no longer show up under sourcetype=windows:mpcomputerstatus. All of the Windows event log events are still being forwarded. Here's what I have tried:

- updating Splunk and the forwarders from 8.1.2 to 8.2.3
- running the UF service as both a domain service account (the previous setting) and Local System
- changing the logging config to DEBUG in log.cfg and log-cmdline.cfg

Also, I found it odd that all of my Win10 workstations stopped on the same day, and my Server 2016 machine stopped on a different day. Any ideas?
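As a hedged diagnostic sketch rather than a known fix, the forwarder's own internal logs can be checked from the indexer for anything the PowerShell modular input is complaining about; the host filter is a placeholder:

    index=_internal host=<your_UF_host> (source=*splunk-powershell* OR component=ExecProcessor OR component=ModularInputs)
    | stats count by source component log_level

If nothing at all arrives in _internal from that host, the problem is more likely in forwarding or outputs.conf than in the PowerShell input itself.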
I see below log in operation_install showing continuous failure to connect to https://gravity-site.kube-system.svc.cluster.local:3009/healthz. ================ Wed Nov 10 02:40:41 UTC [INFO] [DAPD02] Executing postInstall hook for site:6.1.48. Created Pod "site-app-post-install-125088-zqsmd" in namespace "kube-system". Container "post-install-hook" created, current state is "waiting, reason PodInitializing". Pod "site-app-post-install-125088-zqsmd" in namespace "kube-system", has changed state from "Pending" to "Running". Container "post-install-hook" changed status from "waiting, reason PodInitializing" to "running". ^[[31m[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz Get https://gravity-site.kube-system.svc.cluster.local:3009/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) ^[[0mContainer "post-install-hook" changed status from "running" to "terminated, exit code 255". Container "post-install-hook" restarted, current state is "running". ^[[31m[ERROR]: failed connecting to https://gravity-site.kube-system.svc.cluster.local:3009/healthz Get https://gravity-site.kube-system.svc.cluster.local:3009/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) ^[[0mContainer "post-install-hook" changed status from "running" to "terminated, exit code 255". Container "post-install-hook" changed status from "terminated, exit code 255" to "waiting, reason CrashLoopBackOff". ================ The gravity cluster status after the installation failure: ================ [root@DAPD02 crashreport]# gravity status Cluster name: charmingmeitner2182 Cluster status: degraded (application status check failed) Application: dsp, version 1.2.1 Gravity version: 6.1.48 (client) / 6.1.48 (server) Join token: b9b088ce63c0a703ee740ba5dfb380d Periodic updates: Not Configured Remote support: Not Configured Last completed operation: * 3-node install ID: 46614e3c-fcd1-4974-8cd7-dc404d1880b Started: Wed Nov 10 02:33 UTC (1 hour ago) Completed: Wed Nov 10 02:35 UTC (1 hour ago) Cluster endpoints: * Authentication gateway: - 10.69.80.1:32009 - 10.69.80.2:32009 - 10.69.89.3:32009 * Cluster management URL: - https://10.69.80.1:32009 - https://10.69.80.2:32009 - https://10.69.89.3:32009 Cluster nodes: Masters: * DAPD02 / 10.69.80.1 / master Status: healthy [!] overlay packet loss for node 10.69.89.3 is higher than the allowed threshold of 20% (current packet loss at 100%) [!] overlay packet loss for node 10.69.80.2 is higher than the allowed threshold of 20% (current packet loss at 100%) Remote access: online * DWPD03 / 10.69.80.2 / master Status: healthy [!] overlay packet loss for node 10.69.80.1 is higher than the allowed threshold of 20% (current packet loss at 100%) [!] overlay packet loss for node 10.69.89.3 is higher than the allowed threshold of 20% (current packet loss at 100%) Remote access: online * DDPD04 / 10.69.89.3 / master Status: healthy [!] overlay packet loss for node 10.69.80.2 is higher than the allowed threshold of 20% (current packet loss at 100%) [!] overlay packet loss for node 10.69.80.1 is higher than the allowed threshold of 20% (current packet loss at 100%) Remote access: online ================
Are there any plans to support HTTP/2 for HEC inputs?
I have a panel on a dashboard that lists events in a security log. I can list them by Event ID, but I would like them listed by Event ID count so that the most frequent are at the top. If I change "count by Event" to "count by count" I get the error "The output field 'count' cannot have the same name as a group by field."

    <query>index="wineventlog" $Site_Token$ $Cmptr_Token$ $Type$ LogName="Security" Type=Information | stats count by Event</query>

How do I get it to list them in descending order by count?
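A minimal sketch of the usual fix: keep the stats grouping as it is and add a sort on the count afterwards, so the rows come back most-frequent first:

    <query>index="wineventlog" $Site_Token$ $Cmptr_Token$ $Type$ LogName="Security" Type=Information | stats count by Event | sort - count</query>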
Has anybody used, or is anybody currently using, DB Connect with their Red Hat Satellite server?