All Topics

I want to add a new row to my search results using values from the previous result. Basically, I am counting a few matched strings and I want to display the percentage of each matched string in a new row using some mathematical operators or functions. Below is what I have done. My first query works fine, but the second query in the append gives an error: Error in 'eval' command: The expression is malformed. Expected AND.

index="12345" "Kubernetes.namespace"="testnamespace"
| bin _time
| stats count(eval(searchmatch("String1"))) AS Success count(eval(searchmatch("string2"))) AS Sent count(eval(searchmatch("string3"))) AS Failed
| append
    [ stats eval Success_percent= Success/(Success+Sent+Failed) AS Success
      eval Sent_Percent= Sent/(Success+Sent+Failed) AS Sent
      eval Failed_percent= Failed/(Success+Sent+Failed) AS Failed ]
| transpose 0 column_name="Status"
| rename "row 1" as Count
| rename "row 2" as "Percent"
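One way to add the percentage row without the malformed eval is appendpipe, which runs a sub-pipeline over the results already produced, so the Success/Sent/Failed fields computed by the stats are available to eval. A sketch of the idea, not a tested answer:

```
index="12345" "Kubernetes.namespace"="testnamespace"
| stats count(eval(searchmatch("String1"))) AS Success
        count(eval(searchmatch("string2"))) AS Sent
        count(eval(searchmatch("string3"))) AS Failed
| appendpipe
    [ eval Total=Success+Sent+Failed,
           Success=round(Success/Total*100, 2),
           Sent=round(Sent/Total*100, 2),
           Failed=round(Failed/Total*100, 2)
      | fields - Total ]
| transpose 0 column_name="Status"
| rename "row 1" AS Count, "row 2" AS Percent
```

Unlike append, appendpipe does not start a fresh search, which is why eval can see the fields from the first stats here.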
Hello, We are monitoring OpenShift with AppDynamics 21.4.17. We are using auto-instrumentation to monitor some namespaces. We noticed that when some pods are removed, their app agent status shows 0% on the controller, but their machine agent keeps showing 100%. The node stays on the controller until it is manually deleted. Has anyone experienced similar behavior? Thanks,
Our Splunk license usage hit 100% and we are not sure how this is happening. We checked the DMC and it shows two of our servers and a few clients sending excessive amounts of events. This was not happening before; it turns out someone was modifying the config files on our Splunk server. Would anyone know which config files are causing the issue? All local input config files were modified in some way.
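To see which hosts, sources, and sourcetypes are driving the volume, Splunk's own license usage log can be summed (a standard query against the _internal index; adjust the time range as needed):

```
index=_internal source=*license_usage.log type="Usage"
| stats sum(b) AS bytes BY h, s, st
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB
```

Here h, s, and st are the host, source, and sourcetype fields that license_usage.log records; the top rows point at the inputs whose config files are worth inspecting first.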
We have Security Hub data centralized from all our accounts and have now connected Data Manager to that central account so we can get all Security Hub findings into Splunk Cloud. I have noticed that the data coming in has a basic parser, but it isn't separating the different streams, i.e. GuardDuty, Config, etc. Is there a way to properly parse and tag all of this data from the Security Hub feed so that it will populate all the dashboards, data models, etc.?
subject: appdynamics-gradle-plugin problem: transformation runs on every buildType built

Hello, I'm looking for a way to limit the class transformation (the Gradle task is called "transformClassesWithAppDynamicsFor<buildType>") to certain build types. I'm aware of the flag

adeum {
  ..
  enabledForDebugBuilds = false
  ..
}

to limit transformation to release builds, but that does not solve our problem, as we build several "release" builds in our CI release job and only 2 of them need to be instrumented and thus need their classes transformed. I wouldn't mind it at all if the transform task didn't take more than 6 (!) minutes for a single build type. I'd appreciate any suggestion on how to achieve this. Regards, Jerg
Trying to solve another problem, I started fiddling with outputs on my HFs and followed https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat/ along with other tweaks (including lowering an absurdly high output queue). The HFs are an intermediate forwarder layer receiving data from several UFs as well as from HEC inputs. First I set up outputs.conf like this:

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true
indexAndForward = false
useACK = true
maxQueueSize = 2GB
forceTimebasedAutoLB = false
autoLBVolume = 52428800
writeTimeout = 30
connectionTimeout = 10
connectionTTL = 300
heartbeatFrequency = 15
#SSL Settings
useSSL = true
<here are some SSL settings unimportant here>

[tcpout:my_indexers]
<here goes list of my indexers of course>

Then I checked how many times I'm getting a particular event in two relatively mildly utilised indexes (so that I don't kill my indexers by statsing this). In my case, source shows the IP of the source server, so the combination of timestamp, raw event, and source should be unique; timestamp and raw on their own can yield the same events from different hosts.

index=<something>
| eval timeraw=_time."-"._raw.source
| stats list(splunk_server) as splunk_server by timeraw
| eval c=mvcount(splunk_server)
| stats count by c

It seemed that for the last 15 minutes some 110k events were returned once, around 25k events were returned twice, and 10 events were returned three times. While fiddling around with the settings I lowered autoLBVolume by two orders of magnitude to just 524288 (each HF has two pipelines and handles around 1 MBps of traffic, so the calculation pretty much conforms to that LinkedIn article). And magically, duplicates don't seem to be showing up in the logs any more. But can someone please tell me why? Why would a chunk of data be sent to multiple indexers when I had a bigger autoLBVolume? I don't get it.
Can someone confirm if there is a way to set a token in Dashboard Studio "in the background"? In Classic you could set a token in the source using <init>:

<init>
  <set token="cost_per_unit">500</set>
</init>

But so far the only way I can see to set this token is via a text input as below, and I don't really want it displayed or amendable in the dashboard:

{
    "options": {
        "defaultValue": "500",
        "token": "cost_per_unit"
    },
    "title": "Cost per Unit (USD)",
    "type": "input.text"
}

It seems I cannot hide the input, or even place it at the bottom of the dashboard so it isn't too obvious. This seems like basic stuff and I'm getting more than a little frustrated with the lack of functionality in Dashboard Studio, but I accept this may just be me not finding the correct documentation. So any pointers would be appreciated.
Hi, I'm trying to index the following sources with the configs below. I'm using the Splunk UF. I'm receiving other logs such as internal and Windows Event Security/Application, so there are no firewall or communication issues between the client and the server.

[WinEventlog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
index = main
disabled = 0

[WinEventlog://Microsoft-AzureADPasswordProtection-DCAgent/Operational]
index = main
disabled = 0

Thanks
How can I ping 1678 hosts with the ping command?

| inputlookup host.csv
| map search="search [|ping host=$host$]" maxsearches=1678

With this search I cannot get a result.
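A sketch of a map invocation that may work better, assuming ping is a generating custom command as in the original attempt (the subsearch brackets are not needed; map's search template is itself a full search):

```
| inputlookup host.csv
| map maxsearches=1678 search="| ping host=$host$"
```

Note that map runs one search per row, so 1678 sequential pings will be slow; a scripted input or external script is usually a better fit at this volume.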
Hi Splunk Experts, I've got logs where user activities are tracked based on unique identifiers. I want to display the logs where a username is logged, along with a couple of lines above and below the username. The complication here is that for the same unique identifier, the username will at times be logged on more than one line, and each logged line is then considered a new activity; I want to consider them as a single activity and print only the first match with 4 to 5 lines. Any idea/suggestion would be much appreciated. Thanks in advance.
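If the identifier is available as an extracted field, one way to keep only the first logged line per identifier is dedup after an ascending time sort. A sketch, where unique_id stands in for whatever the identifier field is actually called and username=* assumes the username is an extracted field:

```
index=your_index username=*
| sort 0 + _time
| dedup unique_id
```

dedup keeps the first event it sees per unique_id, which after the ascending sort is the earliest occurrence.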
Hello - I need to calculate the average duration between two status types for a user type in a location in a region. I have attached the sample data. For example, I need to calculate the average duration a given user type remains in "inactive" status. These user types are segregated into different locations in different regions. The result I want is like this (time is in seconds):

              aaa    bbb    ccc
Day 1          30     40     10
Day 2          10     20     30
Day 3          20     10     10

and so on. I would appreciate any help.
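One common pattern for durations between status changes is streamstats, which can carry each user's previous event time and status forward. A sketch only; user_id, user_type, and status are stand-ins for the actual field names in the sample data:

```
index=your_index
| sort 0 user_id _time
| streamstats current=f window=1 last(_time) AS prev_time last(status) AS prev_status BY user_id
| eval duration=_time - prev_time
| where prev_status="inactive"
| bin _time span=1d
| chart avg(duration) over _time by user_type
```

Each event's duration is the time spent in the previous status, so filtering on prev_status="inactive" averages the time spent inactive, split by day and user type as in the expected table.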
Hello, I'm having an issue with a field search. I have a lookup where I specify, for every sourcetype, which field is relevant in order to create a ticket. Let's say the csv has the following:

sourcetype,field
sourcetypeA,host
sourcetypeB,dest

Then, I do a lookup to get this field into a single field across sourcetypes:

index=test
| lookup fields_relation sourcetype OUTPUT relevant_field
| eval relevant_host = 'relevant_field'

What I want now is to do an eval and set the value of this relevant_field (e.g. for sourcetypeA I want a variable named relevant_host with the value of the host variable). But all my tries leave me with only the string 'host'. I tried an eval surrounding the variable with quotes, with no luck; I still get the string. How can I get the variable's value? Thank you!
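SPL's eval cannot directly dereference a field whose name is stored in another field, but foreach can iterate over field names and match them against relevant_field. A sketch, assuming the lookup output is as in the post:

```
index=test
| lookup fields_relation sourcetype OUTPUT relevant_field
| foreach *
    [ eval relevant_host=if("<<FIELD>>"==relevant_field, '<<FIELD>>', relevant_host) ]
```

Inside the foreach template, "<<FIELD>>" expands to each field's name as a string while '<<FIELD>>' expands to its value, so relevant_host ends up holding the value of whichever field relevant_field names.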
Following is my query:

index=backup
| stats count by errors

I have thousands of error codes in the logs and I need to trigger a unique alert for each error code. Is it possible to do this with a single alert? I have saved the above query with the "For each result" option but still get only one trigger containing all the error codes.
Is there any documentation about bootstrap-enterprise.css so that I can use the classes in it? Thanks
Hello community! I'm looking for a way to optimize the search below and I need some help:

index="oswinsec" source="XmlWinEventLog:Security" TargetUserName Kerberos earliest=-5min
| regex TargetUserName="^([a-z]+)\.([a-z]+)"
| regex IpAddress="\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"
| eval Octet1=mvindex(split(IpAddress,"."), 0)
| eval Octet2=mvindex(split(IpAddress,"."), 1)
| eval Octet3=mvindex(split(IpAddress,"."), 2)
| where (Octet1=10 AND Octet2=244 AND Octet3>=192 AND Octet3<=255) OR (Octet1=172 AND Octet2=24)
| dedup TargetUserName
| table TargetUserName IpAddress

Thank you! Regards
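The octet splitting and range checks can usually be collapsed into cidrmatch, which also rejects malformed addresses, making the IP-format regex unnecessary. A sketch: 10.244.192.0/18 covers 10.244.192.0-10.244.255.255 and 172.24.0.0/16 covers 172.24.0.0-172.24.255.255, matching the original where clause:

```
index="oswinsec" source="XmlWinEventLog:Security" TargetUserName Kerberos earliest=-5min
| regex TargetUserName="^([a-z]+)\.([a-z]+)"
| where cidrmatch("10.244.192.0/18", IpAddress) OR cidrmatch("172.24.0.0/16", IpAddress)
| dedup TargetUserName
| table TargetUserName IpAddress
```

Dropping the three eval commands and two regex passes should reduce per-event work noticeably at volume.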
I am trying to develop a React application in Splunk and I want to import Bootstrap to use the Bootstrap grid classes. Or is there any other similar library I can use? The app was created using @splunk/create (Splunk UI tool - https://splunkui.splunk.com/Packages/). Thanks
Hello, We are currently using a Splunk Enterprise license with 24 GB of license space. Our problem is that our indexing rate is above 1000 KB/s and is maxing out our license usage. We cannot upgrade our license due to policies. Our usage reports were not configured, so we cannot see anything through the monitoring reports. Is there something in the inputs or config files that could be causing our machines to send such a large amount of data to Splunk? Any help would be appreciated. Thanks
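Even without configured usage reports, the internal metrics log records per-source throughput, which can point at the noisy inputs (a standard query against the _internal index):

```
index=_internal source=*metrics.log group=per_source_thruput
| stats sum(kb) AS total_kb BY series
| sort - total_kb
```

Here series is the source path; swapping group=per_source_thruput for per_host_thruput or per_sourcetype_thruput breaks the volume down by host or sourcetype instead.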
Hi All, We need to monitor the CPU utilization of splunkd. We have installed the Splunk UF on Windows servers and want to continuously monitor the CPU utilization of the Splunk UF on those servers. Thanks a lot in advance for the help.
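If introspection is enabled on the forwarders (it may need to be turned on for a UF), the resource usage log can be charted per host. A sketch:

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart avg(data.pct_cpu) BY host
```

Alternatively, a Windows perfmon input watching the Process object for the splunkd instance would capture the same information via standard Windows counters.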
Hello amazing community! I'm stuck with a problem that most probably has a really simple solution. I have a table that is generated every night by a batch process, and I need to merge the "today" table with the "yesterday" table and see what is different. This is an example just to keep things easy:

Yesterday table:
A  Old
B  Old
C  Old
D  Old
E  Old
Z  Old

Today table:
A  New
B  New
C  New
D  New
E  New
F  New

Expected result:
A  Old   New
B  Old   New
C  Old   New
D  Old   New
E  Old   New
Z  Old   null
F  null  New

Any idea how I can achieve this? Many thanks in advance
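One way to build the merged view, assuming both tables are available as lookups (yesterday.csv and today.csv are hypothetical names, with the letter in a field called key):

```
| inputlookup yesterday.csv
| eval yesterday="Old"
| append [ | inputlookup today.csv | eval today="New" ]
| stats values(yesterday) AS yesterday values(today) AS today BY key
| fillnull value="null" yesterday today
```

Rows present only in yesterday.csv get "null" in the today column and vice versa, matching the expected result above.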
So currently I have a dashboard with a single-select dropdown and a trendline visual. I want the trendline panel/visual to be hidden when the selected option is "*"; for any other option the panel should be shown. Below is my Simple XML for the trendline:

<panel>
  <title>Total Number of Books Read</title>
  <chart>
    <search>
      <query>| inputlookup BooksRead.csv | search Books_Category="$bc$" | table _time Total_Books Total_Books_Read</query>
      <earliest>-60m@m</earliest>
      <latest>now</latest>
    </search>
    <option name="charting.chart">line</option>
    <option name="charting.drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>

Any help would be greatly appreciated.
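In Simple XML this is usually done by driving a second token from the dropdown's <change> block and making the panel depend on it. A sketch; show_trend is a hypothetical token name, and the dropdown's existing choices/search are elided:

```xml
<input type="dropdown" token="bc">
  <!-- existing label/choices/search for the dropdown go here -->
  <change>
    <!-- when "*" is selected, unset the token so the panel hides -->
    <condition value="*">
      <unset token="show_trend"></unset>
    </condition>
    <!-- any other value: set the token so the panel shows -->
    <condition>
      <set token="show_trend">true</set>
    </condition>
  </change>
</input>

<!-- and on the panel itself: -->
<panel depends="$show_trend$">
  <!-- existing chart goes here -->
</panel>
```

A panel with depends is rendered only while its token is set, so unsetting show_trend on "*" hides the trendline without needing a visible control.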