
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


index=app1 [search index=app1 "orderid" | fields id]

How do I modify the above query so that the subsearch search index=app1 "orderid" | fields id runs first, and its first event's time and last event's time are taken as the earliest and latest times, respectively, for the outer query index=app1? The result would look something like:

index=app1 earliest=x latest=y [search index=app1 "orderid" | fields id]

where x and y are the first and last event timestamps of the subsearch. Thank you
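One common way to do this (a sketch; the index name and search terms are taken from the question) is to compute the time bounds in a second subsearch and let `return` emit them as earliest=/latest= terms:

```spl
index=app1
    [ search index=app1 "orderid"
      | stats earliest(_time) as earliest, latest(_time) as latest
      | return earliest latest ]
    [ search index=app1 "orderid" | fields id ]
```

`return earliest latest` renders as earliest=&lt;epoch&gt; latest=&lt;epoch&gt; in the outer search, so index=app1 is bounded by the first and last event times of the "orderid" search.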
Searching with the criteria earliest="07/04/2021:09:48:00" latest="07/04/2021:09:48:59" interprets the times in my local timezone (AEST), using the format %m/%d/%Y:%H:%M:%S. How do I force the search to interpret these criteria as UTC instead, and in the format "yyyy-mm-ddThh:mm:ss.SSSZ"? Thank you
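Since earliest and latest also accept epoch (UNIX time) values, which are timezone-independent, one workaround (a sketch; the index name is a placeholder) is to parse the UTC timestamps with strptime, whose %z specifier honors an explicit offset:

```spl
index=your_index
    [ | makeresults
      | eval earliest=strptime("2021-07-04T09:48:00.000+0000", "%Y-%m-%dT%H:%M:%S.%3N%z")
      | eval latest=strptime("2021-07-04T09:48:59.000+0000", "%Y-%m-%dT%H:%M:%S.%3N%z")
      | return earliest latest ]
```

Because the offset is stated explicitly (+0000), the resulting epoch values are the same regardless of the searching user's timezone preference.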
I am currently running an on-prem Splunk installation and am trying to figure out the best approach for ingesting data from our VMware environment. Currently I have just a very basic setup: syslog from my ESXi hosts and vCenter goes to a syslog server, where a universal forwarder monitors the files and forwards them to my indexers. That all works reasonably well; however, I would like to be able to use some pre-built dashboards, and I have noted that there are a number of VMware add-ons available. We are also looking to move towards ITSI in the near future, so I would like to amend the way I ingest VMware data so that it is compatible with this. At a fundamental level, I think I'm struggling with understanding the difference between the "Splunk add-on for VMware" and the "Splunk add-on for VMware metrics". Is there a reason why I would pick one of these over the other (or do you normally install both)?
Hi, I have a dashboard with a number of panels. Some panels use the final results of other panels as inputs for their own calculations, so I find myself reusing a lot of the same queries across panels. Is there a cleaner way to pass an output value (for example, a number) from one panel and make it accessible to another panel? Would tokens be an option, or a global variable? Thanks, Patrick
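Tokens are the usual mechanism for this in Simple XML. A minimal sketch (the queries and the token name here are made up for illustration): run the shared query once, capture its result into a token in the search's <done> handler, and reference the token from other panels:

```xml
<dashboard>
  <search id="base_total">
    <query>index=_internal | stats count as total</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
    <done>
      <!-- $result.total$ is the "total" field of the first result row -->
      <set token="total_count">$result.total$</set>
    </done>
  </search>
  <row>
    <panel>
      <single>
        <search>
          <!-- reuse the computed value without re-running the query -->
          <query>| makeresults | eval doubled=$total_count$ * 2</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>
```

When panels share the same underlying events rather than a single computed number, base searches (`<search id="...">` plus post-process `<search base="...">` in the panels) are the complementary technique.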
We set up splunkd to autostart using systemd, following https://docs.splunk.com/Documentation/Splunk/latest/Admin/RunSplunkassystemdservice, but when the Linux server reboots, splunkd does not start; we have to start it manually.
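A frequent cause is that boot-start was previously enabled in the legacy init.d style, or that the systemd unit was created but never enabled. A sequence worth checking (a sketch; adjust $SPLUNK_HOME and the user that owns the installation):

```shell
$SPLUNK_HOME/bin/splunk stop
# remove any legacy init.d boot-start first, then register the systemd unit
$SPLUNK_HOME/bin/splunk disable boot-start
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk
sudo systemctl enable Splunkd    # the default unit name is Splunkd.service
sudo systemctl start Splunkd
systemctl is-enabled Splunkd     # must print "enabled" for start-at-boot
```

If `systemctl is-enabled Splunkd` does not print "enabled", the unit exists but was never told to start on boot, which matches the symptom described.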
I have correlation searches in ES that are not generating notable events as they should. When I click on Content Management and open a search that isn't working, it shows a green check mark next to the index but a red exclamation mark next to the sourcetype, saying that there have been no events in that sourcetype for the last 24 hours. I want to know whether this is the cause of my issue, and what I can do to troubleshoot it. I can see that there are events in the specified sourcetype/index, and they are visible in the search bar of the ES app.
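To check whether events are actually being indexed under the exact sourcetype the correlation search expects, a tstats probe against the indexed fields is useful (index and sourcetype names below are placeholders):

```spl
| tstats count where index=your_index sourcetype=your_sourcetype by _time span=1h
```

One thing to watch for: `tstats` looks at the indexed sourcetype. If the sourcetype you see in ad-hoc searches is produced by a search-time rename, eventtype, or tag, the two views can disagree even though the events are "there", which would explain a green index check alongside a red sourcetype check.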
Hello, I've been using a DEV/TEST license for a while on a test Splunk instance. The license expired and I hadn't renewed it for a while. I have now requested a new license and applied it, but it is still not working. I'm getting this message in the Licensing menu: "This deployment is subject to license enforcement. Search is disabled after 45 warnings over a 60-day window." The new license status is: valid.
Ever tried to assign a Splunk ES notable via Splunk SOAR and had it fail? And you also use centralized authentication, such as Okta, with your Splunk deployment? Here is what is happening.

Splunk ES uses the list of users (cached from SSO, plus local users), as seen under Settings > Users, to build the pull-down for ES notable assignment. This list also matters when assigning notables outside the UI, such as via Splunk SOAR. If an analyst has never accessed the Splunk ES server, they won't show up in the SSO user cache. The search that generates this list is `Threat - Notable Owners - Lookup Gen`.

So either make sure any analyst Splunk SOAR might assign a notable to logs into Splunk ES at least once, or make yourself a static lookup table of names and shim it into `Threat - Notable Owners - Lookup Gen`. Just remember the lookup needs two columns: owner, realname.

A modified search might look like the following:

| rest splunk_server=local count=0 /services/authentication/users
| search capabilities="can_own_notable_events"
| rename title as owner
| append [| makeresults | eval owner="unassigned"]
| eval _key=owner
| eval realname=if(isnull(realname) or realname="", null(), realname)
| table _key owner realname
| inputlookup append=true static_es_analysts_list
| dedup owner
| eval _key=owner
| outputlookup notable_owners_lookup
| stats count
I don't have any ._*.xml files in my app, and I created the tar.gz in a way that excludes locally generated files, yet I am still getting the error: Invalid xml detected in file default/data/ui/views/._xmlFileName.xml at line 1. Could you please let me know a workaround for this? Thanks in advance.
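Those ._* entries are AppleDouble metadata files that macOS tar adds alongside each real file while building the archive, even when they don't exist on disk. A sketch of a packaging command that suppresses them (COPYFILE_DISABLE is honored by macOS tar; --exclude also drops any stray ._* files already present):

```shell
# simulate a small app directory, including a stray AppleDouble file
mkdir -p myapp/default/data/ui/views
echo '<dashboard/>' > myapp/default/data/ui/views/dashboard.xml
touch myapp/default/data/ui/views/._dashboard.xml

# package without AppleDouble entries
COPYFILE_DISABLE=1 tar --exclude='._*' -czf myapp.tgz myapp

# verify: no ._* entries in the archive
tar -tzf myapp.tgz
```

On Linux the COPYFILE_DISABLE variable is simply ignored, so the same command is safe to keep in a build script for either platform.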
Hello, this is my first time asking a question on here, so apologies if there's some format to follow. My work center doesn't have a Splunk admin/engineer, so they asked if I could try upgrading Splunk since it's hosted on Linux and I'm a RHEL admin. My concern is that no clients (besides the HF) are showing up under Forwarder Management in Splunk Web. Am I supposed to re-add all the clients, or should they have started communicating again on their own? I know the indexer is working, since we can search the latest AWS logs, but no Windows/Linux box shows up anymore. All apps and indexes are showing, just no "deployed clients" underneath them. The SH is the master. Any help is greatly appreciated!
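When clients stop phoning home after an upgrade, it is usually worth confirming, on one of the missing clients, that the deployment client configuration survived and that phone-home attempts are still happening. A sketch of those checks (paths assume a default $SPLUNK_HOME):

```shell
# does the client still know its deployment server?
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug
# expected output includes a stanza like:
#   [target-broker:deploymentServer]
#   targetUri = <deployment-server-host>:8089

# are phone-home attempts being made, and are they succeeding?
grep -iE 'DeploymentClient|HttpPubSubConnection' \
    $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20
```

Clients normally re-register on their own; if `targetUri` is present but the log shows connection errors, the problem is on the network/certificate side rather than lost configuration.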
I need to extract the Activity Score and Application UXI Average, but only when the Application Name is a certain name. It's a tricky one because of the way the data comes in: each event contains multiple application names, activity scores, UXI averages, and timeframes. So even when I search for a specific app, because the app name occurs somewhere in the event, I get the whole event back, including all the other apps and their metrics. I hope what I'm explaining is clear; any help would be appreciated.
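Assuming the three values appear as parallel multivalue fields within each event (the field names below are guesses based on the description), the usual pattern is to zip them together, expand to one row per application, and only then filter:

```spl
index=your_index
| eval zipped=mvzip(mvzip('Application Name', 'Activity Score', "|"), 'Application UXI Average', "|")
| mvexpand zipped
| eval parts=split(zipped, "|")
| eval app_name=mvindex(parts, 0), activity_score=mvindex(parts, 1), uxi_average=mvindex(parts, 2)
| search app_name="YourAppName"
| table _time app_name activity_score uxi_average
```

After `mvexpand`, each row carries exactly one application's metrics, so the filter no longer drags along the sibling applications from the same event.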
Not sure this goes here, but: we have 300+ applications which can all make calls to remote services. We are in the process of updating a particular external service, and I'm trying to use AppDynamics to determine which applications are pointing to the new service and which are still pointing to the old one. Manually, I would go to Application > Remote Services and check what is listed as a Remote Service for each app, but what I want is a way to create that list programmatically or call an API that provides it. If there isn't a single place to get it, I can write a script that iterates over each application, but I am not seeing where in the API to get the Remote Services data. Does anyone have a solution for this?
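The Controller REST API exposes this per application: GET /controller/rest/applications lists applications, and GET /controller/rest/applications/{name}/backends lists that application's remote services (backends). A sketch in Python (the controller URL is hypothetical and authentication handlers are left out; the substring-matching helper is purely illustrative):

```python
import json
import urllib.parse
import urllib.request

CONTROLLER = "https://controller.example.com"  # hypothetical controller URL


def fetch_backends_by_app(opener=None):
    """Return {application_name: [backend names]} via the AppDynamics REST API.

    Uses GET /controller/rest/applications and
    GET /controller/rest/applications/{name}/backends with output=JSON.
    """
    op = opener or urllib.request.build_opener()  # add basic-auth/token handlers here
    def get_json(path):
        with op.open(f"{CONTROLLER}{path}?output=JSON") as resp:
            return json.load(resp)
    result = {}
    for app in get_json("/controller/rest/applications"):
        name = app["name"]
        backends = get_json(
            f"/controller/rest/applications/{urllib.parse.quote(name)}/backends")
        result[name] = [b["name"] for b in backends]
    return result


def apps_using_backend(backends_by_app, needle):
    """Pure helper: which applications reference a backend whose name contains needle?"""
    return sorted(app for app, backends in backends_by_app.items()
                  if any(needle in b for b in backends))
```

Running `apps_using_backend(fetch_backends_by_app(), "old-service-host")` would then list every application still pointing at the old service; backend names typically embed the host and port, so a substring match on the hostname is usually enough.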
Hello, I would like to take a value from a search in one index and add it to the result of another search from a different index, summing the two. Here is one search:

index=xxxxx_network_xxxxx | dedup host | stats count(host) as network

And here is the other:

index=xxxx_server_xxxxx | dedup host | stats count(host) as server

I need the value of network + server. Any ideas how to do this in one search, for use in a dashboard? Thanks,
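One way (a sketch using the index names from the question) is to search both indexes at once, tag each event with its source, and count distinct hosts per tag before summing:

```spl
index=xxxxx_network_xxxxx OR index=xxxx_server_xxxxx
| eval type=if(index=="xxxxx_network_xxxxx", "network", "server")
| stats dc(host) as hosts by type
| stats sum(hosts) as total
```

`dc(host)` replaces each `dedup host | stats count(host)` pair. Note that a host appearing in both indexes is counted once per index here, matching the behavior of summing the two original searches; a plain `| stats dc(host) as total` over both indexes would instead count it only once overall.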
I need to find out which apps on the HWF are currently used frequently, and which apps are sending data to the indexers.

Context: the Upgrade Readiness app has identified several apps that are not supported or are in need of an upgrade. I need to see whether these apps are still needed and can be removed, or truly need to be upgraded prior to the Splunk version upgrade of the HWF.
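To see what a heavy forwarder is actually sending, its internal metrics are a good starting point. A sketch (replace the host filter with the HWF's hostname); `series` here is the sourcetype, which can then be mapped back to the app defining the input:

```spl
index=_internal host=your_hwf source=*metrics.log* group=per_sourcetype_thruput
| stats sum(kb) as total_kb by series
| sort - total_kb
```

For the app-to-input mapping itself, running `$SPLUNK_HOME/bin/splunk btool inputs list --debug` on the HWF prints every active input stanza together with the file, and therefore the app, it comes from.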
Hello, I have a search that normally shows 4 rows; I need to display only one row, with only the Status column.

index=traderestitution source="web_ping://*" | stats count(eval(response_code="200")) as count_status | eval Status=if(count_status!="4","NOK","OK")

Here is the result of the search. Can someone help?
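`stats` already collapses the four web_ping rows into a single result row, so what remains is limiting the output to the Status column and making the comparisons type-safe. A sketch (`==` inside the eval and the numeric 4 are the defensive choices here):

```spl
index=traderestitution source="web_ping://*"
| stats count(eval(response_code=="200")) as count_status
| eval Status=if(count_status!=4, "NOK", "OK")
| fields Status
```

The final `fields Status` keeps only the Status column, so the result is exactly one row with one column.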
Is there a way to modify the columns shown on the default Triggered Alerts page? It currently only shows the default Time, Fired alerts, App, Type, Severity, and Mode columns. I want it to show ComputerName or something similar, so I don't have to click into each alert to see which system the alert is for.
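The Triggered Alerts page itself is not customizable, but the same information is reachable from an ordinary search, so a small dashboard panel can serve as a replacement with whatever columns you like. A sketch built on the scheduler's internal logs (a field like ComputerName only exists inside each alert's own result set, which this summary view does not expose):

```spl
index=_internal sourcetype=scheduler alert_actions=*
| table _time savedsearch_name app user result_count sid
| sort - _time
```

To pull fields out of an alert's actual results, the `sid` column can be fed to `loadjob` while the search artifacts are still retained.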
Background information

In our system, every visit consists of one or more actions. Every action has its own name; in Splunk it's a field named "transId". Every time an action is triggered it gets a unique sequence number; in Splunk it's a field named "gsn". A customer has a unique id; in Splunk it's a field named "uid". For the duration of a customer's visit to our system, they have a unique session id; in Splunk it's a field named "sessionId". To locate a complete operation of a user, we need to use uid and sessionId together. Like many other systems, the order of actions in our system is fixed under normal circumstances.

What we want

We want to create an alert to monitor for abnormal orderings of actions. For example, an important action named "D" sits at the end of an action chain; under normal circumstances, you must access our system in the order "A B C D". But a hacker might skip action "B", which could be the action that verifies his identity. The problem is that I don't know the commands to surface these abnormal results. We can accept having to input the expected order of actions for every action chain; it would be even better to read the order from a configuration file.

What I've tried

| stats count by sessionId uid transId gsn _time
| sort 0 sessionId uid _time

I can get every user's order of actions with this command. Can you give me some advice? If you need more information, you can ask me here. Best wishes!
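For the specific example given ("D" must not occur unless "B" happened earlier in the same session), a streamstats-based sketch (the index name is a placeholder; the transId values come from the example):

```spl
index=your_index transId IN ("B", "D")
| sort 0 _time
| streamstats count(eval(transId=="B")) as b_seen by uid sessionId
| where transId=="D" AND b_seen==0
```

This flags every "D" event whose session has seen no prior "B". For the more general "does the whole chain match the expected order" case, one approach is to collapse each session into a string, e.g. `| stats list(transId) as chain by uid sessionId | eval chain=mvjoin(chain, " ")`, and compare it against a lookup of expected orders, which also lets the expected chains live in a CSV file rather than in the search itself.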
Hi everyone. I have this query saved as a dashboard (statistical table):

| lookup local=true ipasncidr_def CIDR as dest_ip output Organization
| lookup src_eonid_name.csv SRC_EONID OUTPUT "SRC_EONID NAME"
| top limit=3 "SRC_EONID NAME", dest_ip, dest_port, servicenow, Organization by SRC_EONID
| stats values(dest_ip) as "Destination" dc(dest_ip) as "Destination IP count" values(Organization) as "Organization" values(dest_port) as "Dest Ports" values(servicenow) as "Service now Tickets" by "SRC_EONID NAME" SRC_EONID

Is there a way I can include percentages for all the fields captured in the statistical table?
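`top` computes a percent column, but it is discarded once the results pass through `stats`, so a percentage has to be derived from a count carried into the final table. A sketch (the leading `...` stands for the existing pipeline; it adds `count as events` to the stats, then computes each group's share of the total):

```spl
... | stats count as events values(dest_ip) as "Destination" dc(dest_ip) as "Destination IP count"
        values(Organization) as "Organization" values(dest_port) as "Dest Ports"
        values(servicenow) as "Service now Tickets" by "SRC_EONID NAME" SRC_EONID
| eventstats sum(events) as total_events
| eval "Event %"=round(100 * events / total_events, 2)
| fields - total_events
```

The same eventstats/eval pattern works for any per-row value you want expressed as a share of its column total, for example "Destination IP count".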
I followed the steps under "Migrate the KV store after an upgrade to Splunk Enterprise 8.1 or higher in a single-instance deployment" in this document: https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment

When I type the command ./splunk splunk migrate kvstore-storage-engine --target-engine wiredTiger I get an error that the splunkd service is not running. I then changed the command to ./splunk migrate kvstore-storage-engine --target-engine wiredTiger (removing the extra "splunk" word) and got no status prompt after entering it. I then checked the mongod.log file and it shows no new entries. I know Windows can be a bit finicky with Splunk, but is there something I am missing to get this migration from mmapv1 to wiredTiger working?
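The migrate command talks to the KV store through a running splunkd, so the first error message was the real hint: start Splunk before migrating. A sketch of the sequence (run from the bin directory of the Splunk installation):

```shell
./splunk start
./splunk show kvstore-status          # wait until the status reports "ready"
./splunk migrate kvstore-storage-engine --target-engine wiredTiger
./splunk show kvstore-status          # storage engine should now read wiredTiger
```

If `show kvstore-status` never reaches "ready", mongod itself is failing to start, and splunkd.log (search for KVStoreConfigurationProvider) is the place to look before retrying the migration.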
Hello, I see in the ITSI manual that bidirectional integration exists out of the box for ServiceNow and Splunk Infrastructure Monitoring. I would like the same functionality for Microsoft DevOps. Is this possible? Are there any frameworks or templates within ITSI that facilitate the creation of this type of integration? Thanks! Andrew