All Topics


I feel like this should be a simple solution, but I can't find it. My search gives the values that were present for a group both yesterday and today, but I want to extract the ones that are not present on both days. My search is currently doing this:

Group | Values_today    | Values_yesterday        | Count_today | Count_yesterday | change
a     | 111 333 444 555 | 111 222 333 444 555     | 4           | 5               | -1
b     | 111 222 333     | 111 222 333             | 3           | 3               | 0
c     | 111 222 333 666 | 111 222 333 444 555 666 | 4           | 6               | -2
d     | 111 222 333     | 111 222                 | 3           | 2               | +1

Here is the desired output:

Group | Values_today    | Values_yesterday        | Count_today | Count_yesterday | change | Missing_from_today | Missing_from_yesterday
a     | 111 333 444 555 | 111 222 333 444 555     | 4           | 5               | -1     | 222                |
b     | 111 222 333     | 111 222 333             | 3           | 3               | 0      |                    |
c     | 111 222 333 666 | 111 222 333 444 555 666 | 4           | 6               | -2     | 444 555            |
d     | 111 222 333     | 111 222                 | 3           | 2               | +1     |                    | 333
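One possible approach, sketched on the assumption that Values_today and Values_yesterday are multivalue fields on each Group row: walk one list with mvmap and test membership in the other with mvfind, keeping only the values that have no match (mvmap drops values for which the expression returns null).

| eval Missing_from_today=mvmap(Values_yesterday, if(isnull(mvfind(Values_today, "^".Values_yesterday."$")), Values_yesterday, null()))
| eval Missing_from_yesterday=mvmap(Values_today, if(isnull(mvfind(Values_yesterday, "^".Values_today."$")), Values_today, null()))

Inside the mvmap expression the field name refers to the individual value being iterated, and mvfind returns null when no element of the other list matches the anchored regex built from it.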
I would like to dynamically create AWS Metadata inputs using Splunk's REST API. The Splunk documentation for this appears to be incomplete and incorrect: https://docs.splunk.com/Documentation/AddOns/released/AWS/APIreference Has anyone done this?
I have a lookup containing varying information that is linked to a user field in the lookup. I need a way to allow users to see this lookup, but only the rows that are linked to their own user account. I can do this in search by using a rest command to pull the user's username and match it to the user field, but there's nothing stopping them from simply removing the rest line and seeing everything in the lookup. I've also tried moving the data into an index and building the rest call into the role's search filter, but Splunk doesn't allow subsearches in a search filter, so this also doesn't work. So I need a way to automatically apply the filter whenever anyone tries to access the lookup and/or index. I believe there may be a way to use Python to add a role that allows them to run the filtered search on a dashboard and then remove the role once the search has completed, but my knowledge of Python is nil. Has anyone got any experience building or working with something similar?

| inputlookup lookup.csv
| search
    [| rest /services/authentication/current-context/context
    | table username
    | rename username as user]
Hello Members, this is a great source of information and help. I am using the Splunk Add-on for Squid Proxy and am getting data from the squid access log in the recommended Splunk format. The dashboard that comes with the install is fine, and I am creating another dashboard based on it. I am monitoring bytes_in, bytes_out, and bytes; it seems that bytes is the total of bytes_in and bytes_out. I am using a search that returns the sum of bytes_out with | stats sum(bytes_out) across all src_ips, like this:

index=squid
| stats sum(bytes_out) as TotalBytes
| eval gigabytes=TotalBytes/1024/1024/1024
| table gigabytes

I do the same thing for "bytes". Is there some way I can create a visualization, using a single value viz, that shows bytes per time, like x bytes/hour, maybe even one of those gauges? I would like to use a time picker with this as well, or a selectable span, etc. Would timechart allow me to do this? Thanks so much, eholz1
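A minimal sketch of the timechart approach, assuming the same index=squid data and the bytes_out field from the add-on: a Single Value visualization fed by a timechart shows the latest bucket as the big number and uses the series for the trend/sparkline, and the search follows whatever range the time picker selects.

index=squid
| timechart span=1h sum(bytes_out) as bytes_out
| eval gb_out_per_hour=round(bytes_out/1024/1024/1024, 3)
| fields _time gb_out_per_hour

The span could be driven by a dashboard token if a selectable span is wanted, and the same pattern works for bytes or bytes_in; a radial gauge would instead want a single row, for example the same search followed by | stats avg(gb_out_per_hour).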
Hello, I am stuck on a query and need someone's help please. The goal of the query is to use a lookup whose two columns hold the hostnames and FQDNs that are the target scope for the search. I need to find out what new local accounts have been created AND who created them. The OS scope is Windows for now, although I will need to do this search on *NIX servers as well. The query itself works, but I don't know whether the input scope is actually being applied, or what the best-practice method is. There are 2 columns I'm focused on in the CSV, "name" and "fqdn". I have done extensive research on this; one article says to put the subsearch in [] brackets after the main query, but another says to put the inputlookup as the first part of the search and the rest of the query after it. Let me know what is right/wrong, or the reasons to do it either way. Here is my query:

sourcetype=wineventlog source="WinEventLog:Security" (EventCode=4720 OR EventCode=624)
| eval CreatedBy = mvindex(Account_Name,0)
| eval New_User = mvindex(Account_Name,1)
| search CreatedBy=*
| table _time ComputerName EventCode CreatedBy New_User name ip_address
| sort by ComputerName, _time
    [| inputlookup Servers.csv
    | fields fqdn, name
    | lookup Servers fqdn AS ComputerName, name AS ComputerName ]
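For what it's worth, the usual pattern is to place the inputlookup subsearch right after the base search terms rather than at the end: the subsearch returns a list of ComputerName values that gets turned into an OR filter before the rest of the pipeline runs. A sketch, assuming ComputerName in the Windows events holds the FQDN (if some hosts log the short name instead, the same subsearch can return name as ComputerName as well or instead):

sourcetype=wineventlog source="WinEventLog:Security" (EventCode=4720 OR EventCode=624)
    [| inputlookup Servers.csv
    | fields fqdn
    | rename fqdn as ComputerName ]
| eval CreatedBy=mvindex(Account_Name,0)
| eval New_User=mvindex(Account_Name,1)
| search CreatedBy=*
| table _time ComputerName EventCode CreatedBy New_User ip_address
| sort 0 ComputerName, _time

The lookup command, by contrast, is for enriching events with extra columns from the CSV, not for scoping which events are searched.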
Hi Splunkers, I have the following doubt: suppose I have a scenario where data are collected by universal forwarders, which forward them to a cluster of HFs. This cluster is, of course, created to provide HA. The primary HF goes down, so the UFs start to send data to the secondary; is the time the UFs need to detect that the primary is down and start sending data to the secondary known/calculable?
Before creating a lookup with the outputlookup command, I specified which fields I wanted, and in which order, using the table command. When I do | inputlookup lookup.csv the fields are there, but they are in alphabetical order. How can I query the lookup and get the fields in the order I specified?
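A minimal sketch, with hypothetical column names (fieldA, fieldB, fieldC stand in for the real ones): the CSV keeps the column order it was written with, and piping inputlookup into table (or fields) fixes the display order regardless of how the fields sidebar sorts them.

| inputlookup lookup.csv
| table fieldA fieldB fieldC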
Basically, I want to create an alert that runs a particular search (one we currently run manually) when the login-failure count is greater than 30. Then I want the search to stop once the login-failure count drops back below 15, and output the results via email. I am getting frustrated because I can't seem to find anything I can use to achieve this result. Any help would be greatly appreciated.
In a Splunk Enterprise instance, will configuring a universal forwarder to clone all event logs to two indexers result in double the license usage? The intent is for all events to be mirrored in two separate indexes.   
I'm looking to roll out the UF without the need for a deployment server - we have zero infrastructure and I'd rather not have to start hosting/monitoring/patching VMs to use Splunk Cloud. I'm looking to automate the install with a cmd/PowerShell script rolled out through MDM, and I'm struggling to find the MSI switches or examples for this scenario. I'll likely manage the installs through MDM as well. Can someone detail how to install the Universal Forwarder on Windows for Splunk Cloud without a deployment server? I have the installer and the credentials package, but the MSI guides don't explain how to pass the credentials package to the installer or what to use for the RECEIVING_INDEXER flag, if it is still required. Many thanks for any help with this.
Hi All, I have these 2 monitor stanzas in one inputs.conf, but I am only getting data for the latest monitor stanza. In both cases the file-name strings are similar but the numbers change. Is it because of the crcSalt that only one "Backup*" file name is being considered? Also, does the sourcetype need to be different if we are using "Backup*" in the monitor stanza?

File name for the first stanza:  BackupJobSummaryReport_3696_6883300_1588_1678235411
File name for the second stanza: BackupJobSummaryReport_19068_455837_9176_1678233614

[monitor://\\A11FLSCOMMVAULT\commvault_reports\Final_Report\Backup*]
index = backup
disabled = 0
sourcetype = commvault
crcSalt = <SOURCE>

[monitor://\\A11FLSCOMMVAULT\commvault_reports\LISTAPAC\Backup*]
disabled = 0
index = backup
sourcetype = commvault
crcSalt = <SOURCE>
I'm using the analytics query below to fetch EUM data on a daily basis and upload it to another data source in AWS.

SELECT eventTimestamp, pagetype, pageexperience, pageurl, metrics.`End User Response Time (ms)` AS EURT_ms FROM browser_records

I am looking for a REST API option to fetch data using the above query. I have gone through a couple of posts and tried the curl command below with the global account name and API key, but I am getting an error.

curl -X POST "http://analytics.api.appdynamics.com/events/query" -H "X-Events-API-AccountName:<<>>" -H "X-Events-API-Key:<<>>" -H "Content-type: application/vnd.appd.events+json;v=2" -d 'SELECT * FROM browser_records LIMIT 5'

{"statusCode":500,"code":"Unknown","message":"Unknown server error.","developerMessage":null,"logCorrelationId":"5df56f18-710e-4d4c-9227-c29c78b34ba0"}
curl: (6) Could not resolve host: *
curl: (6) Could not resolve host: FROM
curl: (6) Could not resolve host: browser_records
curl: (6) Could not resolve host: LIMIT
curl: (6) Could not resolve host: 5'
Hi All, Can you please advise how to ingest EMC storage audit logs into Splunk? If possible, I need a step-by-step approach as I'm new to Splunk administration. Thanks
Hi All, I'm a bit new to Splunk. I'm trying to ingest Cisco AMP logs using the Cisco AMP for Endpoints App. I have installed the App on a Heavy Forwarder and configured it with the API client and ID. However, I'm not able to create a new input because I'm getting the following error, and when I checked the status of the KV store on the HF it had failed. Please assist me in fixing this. Thanks. Regards, Inayath
Hi everyone, I asked a similar question and got an answer: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-format-JSON-data-into-a-table/m-p/615115#M50466 But my doubt is: if the fields in the JSON file are dynamic, how can I get them into a table?

{
  "Info": {
    "Unit": "ABC",
    "Project": "XYZ",
    "Analysis Summary": {
      "DB1": { "Available": "1088kB", "Used": "173.23kB", "Used(%)": "15.92%", "Status": "OK" },
      "DB2": { "Available": "4096kB", "Used": "1591.85kB", "Used(%)": "38.86%", "Status": "OK" },
      "DB3": { "Available": "128kB", "Used(%)": "2.6%", "Status": "OK" },
      "DB4": { "Available": "16500kB", "Used": "6696.0", "Used(%)": "40.58%", "Status": "OK" },
      "DB5": { "Available": "22000kB", "Used": "9800.0", "Used(%)": "44.55%", "Status": "OK" }
    },
    "RAM_Tracking": { "a": "2", "b": "1088.0", "c": "32.1220703125" },
    "Database2_info": { "a": "4", "b": "4096.0", "c": "654.3212890625" },
    "Database3_info": { "a": "5", "b": "6696", "c": "9800" },
    "Database4_info": { "a": "6", "b": "128.0", "c": "21.086" }
  }
}

As you can see, the field "Used" is missing for DB3, but I want to show it in the table as empty. This is the kind of table I am after:

Database | Available | Used      | Used(%) | Status
DB1      | 4096KB    | 1582.07kB | 38.62%  | OK
DB2      | 1088kB    | 172.8kB   | 15.88%  | OK
DB3      | 128KB     | NA        | 0%      | OK
DB4      | 22000KB   | 9800.0KB  | 44.55%  | OK
DB5      | 16500KB   | 6696.0KB  | 40.58%  | OK

Wherever there is no data I want to keep it as "NA". Until now I have only worked with constant data. Is it possible to create a table like this from dynamic data? Can anyone please help me.
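One possible direction, assuming each event is a single raw JSON document shaped like the sample above (the path "Info.Analysis Summary" is taken from that sample): let spath flatten the nested objects, pivot the per-database fields into one row per database, and fill the missing cells with NA.

| spath
| fields "Info.Analysis Summary.*"
| rename "Info.Analysis Summary.*.*" as "*|*"
| transpose 0 column_name=path
| rex field=path "^(?<Database>[^|]+)\|(?<attribute>.+)$"
| where isnotnull(Database)
| xyseries Database attribute "row 1"
| fillnull value="NA"
| table Database Available Used "Used(%)" Status

Because the rename and transpose operate on whatever database objects happen to exist, new DB entries or missing attributes don't have to be hard-coded; only the final table line names specific attributes, and it could be dropped if the attribute set itself is dynamic.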
Hi all Ninjas, I need some help with a calculation that is easy in Excel but that I want to convert to SPL. I have tried streamstats and autoregress but cannot figure out how to do it.

Problem statement: how do I convert the calculation for mean deviation? The formula below is applied in column j. Explanation of the formula: the calculation starts at a count of 20; it takes the value of column i on the current row and subtracts column h of the same row, then keeps that same column-i value and subtracts column h of each preceding row ("count - 1" and so on) until it has covered the 20-row period. Hopefully someone can help, as this part of the calculation has had me stuck for a few days and I cannot submit the system migration to Splunk to my boss without it.

=(ABS(I22-H22)+ABS(I22-H21)+ABS(I22-H20)+ABS(I22-H19)+ABS(I22-H18)+ABS(I22-H17)+ABS(I22-H16)+ABS(I22-H15)+ABS(I22-H14)+ABS(I22-H13)+ABS(I22-H12)+ABS(I22-H11)+ABS(I22-H10)+ABS(I22-H9)+ABS(I22-H8)+ABS(I22-H7)+ABS(I22-H6)+ABS(I22-H5)+ABS(I22-H4)+ABS(I22-H3))/20

count | date       | stop  | top    | bottom | avg_distance (col h) | moving_avg_20_period (col i) | Mean Deviation (col j)
1     | 2022-08-01 | 2.29  | 2.31   | 2.265  | 2.2883               |                              |
2     | 2022-08-02 | 2.31  | 2.37   | 2.28   | 2.32                 |                              |
3     | 2022-08-03 | 2.27  | 2.36   | 2.24   | 2.29                 |                              |
4     | 2022-08-04 | 2.35  | 2.36   | 2.26   | 2.3233               |                              |
5     | 2022-08-05 | 2.44  | 2.57   | 2.34   | 2.45                 |                              |
6     | 2022-08-08 | 2.49  | 2.51   | 2.41   | 2.47                 |                              |
7     | 2022-08-09 | 2.41  | 2.52   | 2.35   | 2.4267               |                              |
8     | 2022-08-10 | 2.51  | 2.59   | 2.38   | 2.4933               |                              |
9     | 2022-08-11 | 2.49  | 2.5176 | 2.43   | 2.4792               |                              |
10    | 2022-08-12 | 2.38  | 2.5    | 2.26   | 2.38                 |                              |
11    | 2022-08-15 | 2.35  | 2.41   | 2.32   | 2.36                 |                              |
12    | 2022-08-16 | 2.39  | 2.41   | 2.31   | 2.37                 |                              |
13    | 2022-08-17 | 2.31  | 2.38   | 2.3    | 2.33                 |                              |
14    | 2022-08-18 | 2.28  | 2.32   | 2.23   | 2.2767               |                              |
15    | 2022-08-19 | 2.11  | 2.32   | 2.055  | 2.1617               |                              |
16    | 2022-08-22 | 2.08  | 2.19   | 2.07   | 2.1133               |                              |
17    | 2022-08-23 | 2.02  | 2.105  | 1.92   | 2.015                |                              |
18    | 2022-08-24 | 2.01  | 2.06   | 1.94   | 2.0033               |                              |
19    | 2022-08-25 | 2.01  | 2.07   | 1.87   | 1.9833               |                              |
20    | 2022-08-26 | 2.02  | 2.06   | 1.93   | 2.0033               | 2.2769                       | 0.13814
21    | 2022-08-29 | 2.02  | 2.04   | 1.96   | 2.0067               | 2.2628                       | 0.15529
22    | 2022-08-30 | 2.205 | 2.22   | 2      | 2.1417               | 2.2539                       | 0.160265
23    | 2022-08-31 | 2.09  | 2.25   | 2.07   | 2.1367               | 2.2462                       | 0.16509
24    | 2022-09-01 | 1.92  | 2.16   | 1.92   | 2                    | 2.23                         | 0.173545
25    | 2022-09-02 | 1.86  | 1.99   | 1.83   | 1.8933               | 2.2022                       | 0.1766
26    | 2022-09-06 | 1.8   | 1.86   | 1.73   | 1.7967               | 2.1685                       | 0.176745
27    | 2022-09-07 | 1.84  | 1.85   | 1.75   | 1.8133               | 2.1379                       | 0.175175
28    | 2022-09-08 | 1.7   | 1.85   | 1.665  | 1.7383               | 2.1001                       | 0.174805
29    | 2022-09-09 | 1.75  | 1.76   | 1.63   | 1.7133               | 2.0618                       | 0.17136
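A sketch of one way to express this in SPL, assuming the data is already one event per row with the count, date, and avg_distance fields shown above: streamstats builds both the 20-period moving average and the list of the last 20 avg_distance values, mvexpand breaks that list out so each row's absolute deviations can be averaged, and stats folds the rows back together.

| sort 0 count
| streamstats window=20 avg(avg_distance) as moving_avg_20_period list(avg_distance) as last_20
| mvexpand last_20
| eval abs_dev=abs(moving_avg_20_period - last_20)
| stats first(date) as date first(avg_distance) as avg_distance first(moving_avg_20_period) as moving_avg_20_period avg(abs_dev) as mean_deviation by count
| eval moving_avg_20_period=if(count>=20, round(moving_avg_20_period, 4), null())
| eval mean_deviation=if(count>=20, round(mean_deviation, 6), null())
| sort 0 count

For count 20 this is the same arithmetic as the Excel formula (the average of the 20 absolute differences between I22 and H3..H22), so it should reproduce the 2.2769 / 0.13814 shown in the table; rows before count 20 are blanked out because their window holds fewer than 20 values.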
I'm an admin on a trial account and I am unable to create a RUM Token from Settings > Access Tokens. I only have options Ingest Token or Api Token. The documentation says [here](https://docs.splunk.... See more...
I'm an admin on a trial account and I am unable to create a RUM Token from Settings > Access Tokens; I only have the options Ingest Token and API Token. The documentation says [here](https://docs.splunk.com/Observability/rum/set-up-rum.html) that there should be a RUM Token option at this location, but it is not available. How can I create a RUM Token?
In order to upgrade Splunk from 8.1.3 to 9.0.4, I need to migrate the KVstore engine from MMAPv1 to WiredTiger and then upgrade the KVstore (MongoDB) version to 4.2. I believe I only need to upgrade the KVstore on instances with active KVstore collections, and that I should stop scheduled searches that write to collections, or wait until they finish, before running the migration. By default the KVstore is enabled on all instances, which is a bit confusing, BUT the monitoring console (MC) KVstore dashboard gives me some indication that the only instances with active KVstore collections are the SHC and some HFs... not my MC/license master, the deployment server, or the index cluster master. The Splunk docs mention that you can disable the KVstore and ignore certain collections. My questions: how do I verify and identify the active collections on my instances? Where are the KVstore files located? (The directory appears to be /opt/splunk/var/lib/splunk/kvstore/mongo, but the files are not human-readable.) Is there a query to see the individual KVstore collection sizes with human-readable names? This gives me, I think, most of them... not sure:

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| table eai:acl.app title collection
| search collection!=""

Any advice appreciated. Thank you.
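One possible way to list per-collection sizes with readable names, assuming the KVstore introspection endpoint is available on this version (the ns, size, and count fields below are what that endpoint has returned in my experience, so treat them as an assumption to verify):

| rest splunk_server=local /services/server/introspection/kvstore/collectionstats
| mvexpand data
| spath input=data
| rex field=ns "^(?<App>[^.]+)\.(?<Collection>.+)$"
| eval size_MB=round(size/1024/1024, 2)
| table App Collection count size_MB
| sort - size_MB

Run per instance (or pointed at a peer via splunk_server), this should show which nodes actually hold non-empty collections, which complements the transforms/lookups query above.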
I'm using the map command to iterate through a list of devices and forecast some of the metrics associated with each device. That's all working, but what I really want is to then average the returned results down to a single number per device. The query returns 104 rows per device. I want to average them to a single number per device, but no matter what I pipe to, it simply returns all of the data. I'd appreciate some guidance on making this work.

| inputlookup array_stats.csv
| dedup Array_Name
| map maxsearches=1000 search=" inputlookup array_stats.csv
    | search Array_Name=$Array_Name$
    | timechart avg(IOPS) as avgIOPS avg(ReadRT) as avgReadRT avg(WriteRT) as avgWriteRT values(Array_Name) as ArrayName span=1d
    | predict "avgIOPS" as predIOPS "avgReadRT" as predReadRT "avgWriteRT" as predWriteRT future_timespan=14
    | eventstats avg(avgIOPS) avg(avgReadRT) avg(avgWriteRT) avg(predIOPS) avg(predReadRT) avg(predWriteRT) by ArrayName"
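One adjustment that may get there, sketched on the assumption that each mapped search tags its rows with ArrayName as above: keep the per-day rows inside the map, drop the eventstats (it only annotates rows, it never reduces them), and collapse outside the map with stats, which leaves one row per device.

| inputlookup array_stats.csv
| dedup Array_Name
| map maxsearches=1000 search="inputlookup array_stats.csv
    | search Array_Name=$Array_Name$
    | timechart span=1d avg(IOPS) as avgIOPS avg(ReadRT) as avgReadRT avg(WriteRT) as avgWriteRT values(Array_Name) as ArrayName
    | predict avgIOPS as predIOPS avgReadRT as predReadRT avgWriteRT as predWriteRT future_timespan=14"
| stats avg(avgIOPS) as avgIOPS avg(avgReadRT) as avgReadRT avg(avgWriteRT) as avgWriteRT avg(predIOPS) as predIOPS avg(predReadRT) as predReadRT avg(predWriteRT) as predWriteRT by ArrayName

One thing to watch: the 14 future buckets from predict carry no ArrayName, so they would fall out of the by ArrayName grouping; adding | filldown ArrayName as the last step inside the map fills those rows in if the forecast values should be part of the averages.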
Hello, so currently I have a trendline like below... But I need the visual to also show the stats sum(books) for another date, i.e. the trend of what it was 4 weeks ago versus what it is currently. I tried using span, but that shows me how many books there were on each particular day, not the stats sum(books) in total. I need something like below... any help would be greatly appreciated.
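A direction that might fit, sketched with a placeholder base search (the index, sourcetype, and books field stand in for whatever the real search is): timechart builds the daily totals and timewrap folds the timeline so the current 4-week period and the previous one appear as separate series on the same chart, giving the "now versus 4 weeks ago" comparison over the range the time picker selects.

index=your_index sourcetype=your_sourcetype
| timechart span=1d sum(books) as total_books
| timewrap 4w

If the trendline is meant to be a running total rather than a per-day value, an | accum step could be added, but exactly where depends on whether each period should restart from zero, so that part is left out here.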