All Topics

I have 2 multivalue columns like below (giving two rows as an example):

Column 1   Column 2
A          A
B          C C
A          A
B          B A C C

I want a third column like this (having the values of Column 1 which are also in Column 2):

Column 1   Column 2   Column 3
A          A          A
B          C          C
C A        A          A
B          B          B
A                     A
C C

Please note, Column 1 can also be empty. Thanks in advance.
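A sketch of one possible approach (untested; assumes Splunk 8.0 or later for mvmap, and the column names column1/column2/column3 are illustrative stand-ins for your real field names): iterate over each value of column1 and keep it only if it also appears as an entry of column2.

```
| eval column3=mvmap(column1,
    if(isnotnull(mvfind(column2, "^".column1."$")), column1, null()))
```

Inside mvmap, column1 refers to the current element being iterated, and mvfind returns the index of the first column2 entry matching the regex (or null if none). Appending `| eval column3=mvdedup(column3)` would drop duplicate matches if you don't want them.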
Hello, I am trying to find out whether it is possible to measure available memory using the sai_metrics_indexes. More details of the requirement are below. I have a process which starts and then runs for quite some time. I am able to get the start time and end time of that process run using the query below:

index=test sourcetype="test:node" "processStart()" OR "processEnd()"
| stats earliest(_time) AS Earliest, latest(_time) AS Latest
| eval diff=Latest-Earliest
| eval FirstEvent=strftime(Earliest,"%m/%d/%y %H:%M:%S")
| eval LastEvent=strftime(Latest,"%m/%d/%y %H:%M:%S")
| eval DiffEvent=strftime(diff,"%m/%d/%y %H:%M")
| eval temp = tostring(diff,"duration")
| eval NetTotalTime=replace(temp,"(\d*)\+*(\d+):(\d+):(\d+)","\1 days \2 hours \3 minutes \4 secs")
| rename FirstEvent as ProcessStart, LastEvent as ProcessEnd
| table ProcessStart, ProcessEnd, NetTotalTime

ProcessStart returned from the above query is 08/11/20 06:01:46, and ProcessEnd is 08/11/20 11:35:09. Now, using this ProcessStart and ProcessEnd time, I want to find out the memory used and memory available during that window. In general I use the query below to find out the memory available:

| mstats avg(_value) prestats=true WHERE metric_name="Memory.Available_Bytes" AND "index"="em_metrics" AND "host"="abc" AND `sai_metrics_indexes` span=10s
| timechart avg(_value) AS Avg span=10s
| fields - _span*

The problem with the above query is that it gives me data according to the time range I specify in the time picker. Instead, I want to run this query within my ProcessStart and ProcessEnd. Also, is it possible to combine both queries in a single search so that I can generate a report from it? Hope the question is clear. Looking forward to hearing from someone soon.
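One hedged way to sketch this as a single search (untested against this environment; index, host, and metric names are copied from the question, and the map template quoting may need adjusting): compute the epoch start/end with stats, then let the map command substitute them into the mstats search as its time bounds.

```
index=test sourcetype="test:node" "processStart()" OR "processEnd()"
| stats earliest(_time) AS start latest(_time) AS end
| map maxsearches=1 search="| mstats avg(_value) AS Avg WHERE metric_name=\"Memory.Available_Bytes\" AND index=em_metrics AND host=abc earliest=$start$ latest=$end$ span=10s"
```

map runs the template search once per input row, replacing $start$ and $end$ with the field values from the outer search.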
Hi All, Is there a way we can calculate the number of times a value appears in a multivalue field and put that count into a separate field? TIA
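A sketch of one common idiom (field and value names here are purely illustrative): mvfilter keeps only the multivalue entries matching the value of interest, and mvcount counts what is left.

```
| makeresults
| eval myfield=split("A,B,A,C,A", ",")
| eval a_count=mvcount(mvfilter(match(myfield, "^A$")))
```

Here a_count would come out as 3, since "A" appears three times in the multivalue field.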
This is related to HEC queue size. When I execute "index=_internal host=abc group="queue" name="httpinputq" | eval name=name+":"+host | stats values(name) by max_size_kb", the max_size_kb value shows as 107520 KB. Based on the how-indexing-works diagram https://wiki.splunk.com/Community:HowIndexingWorks, HEC uses httpinputq, but I am not able to find anything related to httpinputq in the Splunk Docs. I am not sure which configuration file the max_size_kb value of 107520 KB comes from. I checked all server.conf and inputs.conf files, but with no luck. I need help here to understand the source of the max_size_kb value. I also referred to the following, but with no luck: https://community.splunk.com/t5/All-Apps-and-Add-ons/When-HF-with-quot-Splunk-DB-Connect-quot-send-data-to-Indexer/td-p/418246
Hi all, We are currently migrating several Java applications from Windows hosts to RHEL, so we are trying to match the source configuration as closely as possible. The issue is that we have 2 applications per machine, and the convention is to have spaces in the application name, so the following things are true: Passing spaces in as JVM args doesn't seem to work with the application config, although you would assume that using quotes around the key-value pair, or just the value, would work. This was not a problem with the args in the Windows registry. I can use spaces in the controller-info.xml; however, I have yet to figure out how we can have 2 separate configurations with the XML files. So, my questions become: Is there a way for AppD to handle spaces in the JVM arguments used to define applicationName/tier/nodeName etc.? If not, how can I handle multiple configurations with 2 JVMs reporting to the same agent? Thanks.
Hi Team, The following inputs.conf works on localhost to monitor a registry key, but is not working on the universal forwarder.

[WinRegMon://HKLM]
baseline=1
disabled=0
hive=\\REGISTRY\\MACHINE\\SYSTEM\\*ControlSet*\\Services\\LanManServer\\Shares\\?.*
index=windows
proc=.*
type=set|create|delete|rename

By the way, even the following hive attribute works fine on localhost but not on the universal forwarder:

hive=HKEY_LOCAL_MACHINE\\SYSTEM\\*ControlSet*\\Services\\LanManServer\\Shares\\?.*

But the default configuration of inputs.conf works on both localhost and the universal forwarder:

[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create
index = windows

Any references are much appreciated.
I am executing several queries to MongoDB through DBConnect (v3.1.4) and the UnityJDBC driver. I'm experiencing issues getting results for anything but the simplest SQL queries. All queries take a very long time compared to when I run the same SQL queries through a UnityJDBC client. Here's an example of the kinds of queries that are able to return results:

Query                                    | Splunk Exec Time | UnityJDBC Client Exec Time
SELECT COUNT(_id) FROM docs              | 8 - 40 sec       | 0.1 sec
SELECT * FROM docs WHERE status='ERROR'  | 20 - 40 sec      | 0.06 sec

Now if I try to use dbxquery to execute any join queries, or anything slightly more complex than the above, I get the following error:

com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=/dev-db:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: /dev-db}, caused by {java.net.UnknownHostException: /dev-db}}]

Here are a couple of examples of queries which work in UnityJDBC's client but fail in Splunk after roughly 30 seconds, yielding the above error:

SELECT * FROM StatusHistory WHERE CAST(ModifiedOn, 'DATE') > '2020-08-07'
SELECT * FROM StatusHistory sh JOIN docs d ON sh.doc_id = d.id
SELECT Status, COUNT(Status) FROM StatusHistory GROUP BY Status HAVING COUNT(Status) > 6

I don't believe there is an intermittent network issue causing this, because the queries that fail always fail in the same way, and the queries that produce results always produce results (sometimes after more than 30 seconds if there are enough concurrent dbxqueries on the system). What could be causing the timeout? In the error log it looks like the host address is missing some data, like an IP address or a hostname. It's strange this is only happening for the more complex queries and not for everything. Why are dbxqueries so slow compared to a JDBC client connected to the same database?
I have an autogenerated dashboard with 160 panels. The good news: each panel's search uses an accelerated saved search. The bad news: I have so many panels that I frequently see this message in about 40 of them:

Search not executed: The maximum number of concurrent historical searches on this instance has been reached., concurrency_category="historical", concurrency_context="instance-wide", current_concurrency=50, concurrency_limit=50

If I hover over those panels and click the "Refresh" icons after the other searches have finished, the search tries again and succeeds, and I see the information I was looking for in the panel. I accept that there is a limit on concurrent searches. I'm willing to wait. But clicking the individual "Refresh" icon in each of the panels with the "Search not executed" message is a lot of work. Ideally, I'd like each panel to automatically retry as capacity frees up, without me clicking the "Refresh" icon. Is that possible? If it's not possible, is there a programmatic way for me to automatically kick off the "Refresh" action? I tried using the JavaScript console with something like: mvc.Components.get("baseSearch_item3_P1").startSearch(); but it couldn't resolve the mvc object. I was thinking I could walk through the DOM and execute an action for each panel that contains the text "Search not executed", but I don't have the expertise to figure out how to do that, and I can't even get the startSearch() action working for one individual search. Any tips or workarounds for automatically refreshing with one JavaScript command via the JavaScript console? As a last-ditch workaround, I've separated the dashboard into 6 sections with "depends" tokens that are controlled by an input dropdown. Each dropdown item loads 1/6th of the dashboard at a time. It's an ugly, hacky way that I really want to avoid. My coworkers will laugh at me for doing that.
I have installed the latest version of SSL Checker on our Search Head, which is running Enterprise 7.3.3. The associated dashboard works as intended, except that when I update and/or remove certs the dashboard does not refresh (it still shows old certs that no longer reside on the search head). I have performed reload exec, debug refresh, and restarted the server with no success at resolving the issue. Within the manual setup portion of the app I also reduced the list of certs to monitor to just one. Even after performing all the same troubleshooting actions above, the SSL dashboard is still showing all the previously monitored certs, including those that have since been removed or updated on the server. Please advise what other troubleshooting steps I should try. @jkat54
Hello, I need your recommendations and advice on configurations for tracking the changes and modifications made to Splunk (on all instances: SH, IX, UF, HF ….). Thank you in advance!
We recently stood up our first app in Azure, but I came to learn that our developers are using SignalR for communication and page-to-page transitions, so our AppD agent doesn't pick these up. For the frontend of the app, all I see in AppD are calls to the /_Host segment and nothing else. Has anyone else been able to monitor SignalR communications and page views with the .NET Core agent on Azure?
Hi, Below is my search query:

index=abc host=xyz source=abcdef
| rename size AS RootObject.size topicName AS RootObject.topicName
| fields "_time" "host" "source" "sourcetype" "RootObject.size" "RootObject.topicName"
| eval "RootObject.topicName"='RootObject.topicName', _time='_time'
| timechart dedup_splitvals=t limit=100 useother=t sum(RootObject.size) AS "Sum of size" span=1d by RootObject.topicName usenull=f
| sort limit=0 _time
| fields _time properties.dta properties.mta

Search result:

_time            | properties.dta | properties.mta
2020-08-07 00:00 | 2149528        | 25167867
2020-08-07 04:00 | 151400         | 1522424
2020-08-08 00:00 | 2299209        | 24934163
2020-08-08 04:00 |                | 1769140

As seen above, I get data at 12:00 AM and 4:00 AM. How can I combine, i.e. sum, a single day's data into just one row? Please help.
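A sketch of one way to collapse those rows (untested; assumes the properties.dta/properties.mta field names from the output above, with "..." standing in for the base search): re-bucket _time to whole days and sum each column per day.

```
... | bin _time span=1d
| stats sum("properties.dta") AS "properties.dta" sum("properties.mta") AS "properties.mta" by _time
```

bin snaps every event's _time to the start of its day, so stats then produces exactly one summed row per day.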
These are two questions that I need to solve:

Memory loss by time *since boot*, aggregated across the entire population.
Memory loss by wall-clock time, aggregated across the entire population.

Base query:

(index=metrics OR index=hc_trials OR index=hc_prod) uptime>1800 (HCTELEM OR HCJUNK)
| fields + payload version deviceid
| eval payload=replace(payload, "\"\"", "\"")
| spath input=payload output=Mem1 path=Mem{1}

Please help me to solve this. TIA
Hello, what is the recommended way to automate a DB backup to a QNAP NAS every 24 hrs (Splunk Enterprise 8.0.4)? Is there a way to do it from the GUI, or must it be done at the CLI? The server is running low on disk space and I have a 24 TB QNAP sitting around.
So we put a couple peers into manual detention to let them "cool off" before doing some maintenance.  Problem is, people were still getting sent to those members despite manual detention, and as a result, couldn't search. Is there a single unauthenticated call I can make via REST from my load balancer to remove manual detention members from the pool?  If so, what is it?  Probably gonna dig through the REST API tonight and (hopefully) find it.  Someone...  beat me to it!
Hi, the match condition below is not working. I also tried using <done></done> as well. Could someone help with this?

<drilldown>
      <condition match="$click.value2$==foo">
                <set token="foo">true</set>
       </condition>
</drilldown>
My Splunk query, which I included below, generates a table which appears as follows. The issue that I'm trying to resolve is being able to populate non-existent values with "No Data", as shown in the 2020-08-11 column. There are other date columns with non-existent values (note, these are not just null values, which have been set to 0 with fillnull; these are non-existent values). Can someone provide some assistance on how to do this? I have used fillnull and filldown, but have not been successful. I have also tried eval statements setting the parameter to null.

Service ID | Resource Name | Transaction Name | Priority | Service Area | Consumer | 2020-08-12 | 2020-08-11 | 2020-08-10 | 2020-08-09
ID1        | GET           | Transaction1     | 1        | Area1        | App1     | 3          |            | 4          | 0
ID2        | PUT           | Transaction2     | 2        | Area2        | App2     | 8          |            | 2          | 5

index=test_index_1 sourcetype=test_sourcetype_2
| eval epoch_Timestamp=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%3QZ")-14400
| rename "Transaction Name" as trans_name, "Application Name" as application_name, "Status Code" as status_code
| eval service_id=case(Verb="GET" AND trans_name="Transaction1" AND application_name="APP1", "ID1", Verb="GET" AND trans_name="Transaction2" AND application_name="App2", "ID2", Verb="PUT" AND trans_name="Transaction2" AND application_name="App2", "ID3", 1=1, "Unqualified")
| where service_id!="Unqualified"
| eval Priority=case(Verb="GET" AND trans_name="Transaction1" AND application_name="APP1", "2", Verb="GET" AND trans_name="Transaction2" AND application_name="App2", "2", Verb="PUT" AND trans_name="Transaction2" AND application_name="App2", "1", 1=1, "Unqualified")
| where Priority!="Unqualified"
| eval service_area=case(Verb="GET" AND trans_name="Transaction1" AND application_name="APP1", "Area1", Verb="GET" AND trans_name="Transaction2" AND application_name="App2", "Area2", Verb="PUT" AND trans_name="Transaction2" AND application_name="App2", "Member", 1=1, "Unqualified")
| where service_area!="Unqualified"
| eval date_reference=strftime(epoch_Timestamp, "%Y-%m-%d")
| stats count(eval(status_code)) as count by service_id, Verb, trans_name, Priority, service_area, application_name, date_reference
| eval combined=service_id."@".Verb."@".trans_name."@".Priority."@".service_area."@".application_name."@"
| xyseries combined date_reference count
| rex field=combined "^(?<service_id>[^\@]+)\@(?<Verb>[^\@]+)\@(?<trans_name>[^\@]+)\@(?<Priority>[^\@]+)\@(?<service_area>[^\@]+)\@(?<application_name>[^\@]+)\@$"
| fillnull value="0"
| table service_id, Verb, trans_name, Priority, service_area, application_name [ makeresults | addinfo | eval time = mvappend(relative_time(info_min_time,"@d"),relative_time(info_max_time,"@d")) | fields time | mvexpand time | makecontinuous time span=1d | eval time=strftime(time,"%F") | reverse | stats list(time) as time | return $time ]
| rename service_id as "Service ID", Verb as "Resource Name", trans_name as "Transaction Name", Priority as "Priority", service_area as "Service Area", application_name as "Consumer"
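One possible approach to sketch (untested; it assumes the date column names all begin with "2020-", and "..." stands in for the full query): after the final rename, use foreach with a wildcard so every null cell in each date column is replaced with the literal string "No Data".

```
... | foreach 2020-* [ eval <<FIELD>>=if(isnull('<<FIELD>>'), "No Data", '<<FIELD>>') ]
```

Inside the foreach subpipeline, <<FIELD>> expands to each matching column name in turn, so only the date columns are touched and the descriptor columns are left alone. Note that an earlier fillnull value="0" on the same columns would already have turned those nulls into 0, so it may need to be removed or restricted for this to take effect.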
Hi, I am new to Splunk Enterprise, and I need your help getting the sample data uploaded into Splunk. I got the sample data from Splunk-7-Essentials-Third-Edition-master, and it is inside the folder C:\Splunk-7-Essentials-Third-Edition-master\Chapter01\eventgen. This is the location of my app -> $SPLUNK_HOME\etc\apps\destination, and I have placed eventgen.conf inside local, i.e. -> $SPLUNK_HOME\etc\apps\destination\local. The sample data is under a new folder 'samples': $SPLUNK_HOME\etc\apps\destination\samples. Now, this is what my eventgen.conf looks like:
---------
# Note, these samples assume you're installed as an app or a symbolic link in
# $SPLUNK_HOME/etc/apps/eventgen. If not, please change the paths below.
# Modified by ericksond

[destinations.sample]
mode = sample
sampletype = csv
outputMode = splunkstream
interval = 10
earliest = -10s
latest = now
count = 3
randomizeCount = 0.33
randomizeEvents = true

token.0.token = ((\w+\s+\d+\s+\d{2}:\d{2}:\d{2}:\d{3})|(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}:\d{3}))
token.0.replacementType = replaytimestamp
token.0.replacement = ["%b %d %H:%M:%S:%f", "%Y-%m-%d %H:%M:%S:%f"]

token.1.token = (5\.5\.5\.5)
token.1.replacementType = file
token.1.replacement = $SPLUNK_HOME/etc/apps/destinations/samples/external_ips.sample

token.2.token = (10\.2\.1\.35)
token.2.replacementType = file
token.2.replacement = $SPLUNK_HOME/etc/apps/destinations/samples/webhosts.sample

token.3.token = (Method-And-URI)
token.3.replacementType = file
token.3.replacement = $SPLUNK_HOME/etc/apps/destinations/samples/destinations-uris.sample

token.4.token = (User-Agent)
token.4.replacementType = file
token.4.replacement = $SPLUNK_HOME/etc/apps/destinations/samples/useragents_desktop.sample

token.5.token = (468)
token.5.replacementType = random
token.5.replacement = integer[100:1000]

token.6.token = (1488)
token.6.replacementType = random
token.6.replacement = integer[200:4000]

token.7.token = (200)
token.7.replacementType = file
token.7.replacement = $SPLUNK_HOME/etc/apps/destinations/samples/destinations-codes.sample
------------
After all these steps, I have restarted Splunk. Could you possibly tell me where I am going wrong? Thanks in advance! @Penkov @harsmarvania57 @naidusadanala
Hey everyone, Every day Splunk is ingesting a CSV of information, and we are doing charts to show when/how things changed. The table consists of "Month", "Project_Name", "Status", "Resolution", "Points", and of course "_time". We are trying to show that if the Status and Resolution change, then the Project_Name gains points; visually it's just a stacked bar chart where the "Total Points" column stays the same but the "Gained Points" column grows over the course of the month. Basically, if we have duplicate Project_Name rows that vary in Status and Resolution, how do we only show the row with the most recent _time? Example:

Month | Project_Name | Status | Resolution | Points | _time
1     | Project_Dog  | Open   | Open       | 2      | 2020-08-11
1     | Project_Dog  | Open   | Open       | 2      | 2020-08-12
1     | Project_Dog  | Done   | Done       | 2      | 2020-08-13
1     | Project_Bird | Open   | Open       | 1      | 2020-08-12
1     | Project_Cat  | Open   | Open       | 3      | 2020-08-12
1     | Project_Cat  | Done   | Done       | 3      | 2020-08-13
1     | Project_Bird | Open   | Open       | 1      | 2020-08-13

According to this example, Project_Dog gained 2 points for Month 1 and Project_Cat gained 3 points for Month 1. How do I get this example to show that Total Points = 6 and Gained Points = 5?
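A sketch of one way to keep only the most recent row per project and then roll up the points (untested; field names and the Done/Done condition are taken from the example above, with "..." standing in for the base search): sort newest-first, dedup so the first (latest) row per Month/Project_Name survives, then sum.

```
... | sort 0 -_time
| dedup Month Project_Name
| stats sum(Points) AS "Total Points"
        sum(eval(if(Status=="Done" AND Resolution=="Done", Points, 0))) AS "Gained Points"
        by Month
```

On the example data this would leave one row each for Project_Dog (Done), Project_Cat (Done), and Project_Bird (Open), giving Total Points = 6 and Gained Points = 5 for Month 1.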
We're thinking of leveraging the approach listed here ( https://apisero.com/recipe-to-implement-splunk-enterprise-on-premise-for-mulesoft-application-using-anypoint-studio-and-anypoint-platform-runtime-manager/ ) to ingest the MuleSoft logs, since the Splunk HEC sounds like an ideal option. Does anyone else have other suggestions?