All Topics
Not sure this goes here, but: we have 300+ applications which can all make calls to remote services. We are in the process of updating a particular external service, and I'm trying to use AppD to determine which applications are already pointing to the new service and which are still pointing to the old one. Manually, I would go to Application -> Remote Services on each app and check what is listed as a Remote Service, but what I want is a way to create a list, or call an API, that will provide this for me. If there isn't a single place, I can write a script to iterate over each application, but I am not seeing where in the API to get the Remote Services data. Does anyone have a solution for this?
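A possible starting point, sketched against the AppDynamics REST API (the controller hostname and credentials are placeholders, and the exact endpoint paths and JSON field names should be verified against your controller version):

    # List every application, then pull its registered backends (the entities
    # shown under Application -> Remote Services) and print their names.
    CONTROLLER="https://your-controller.example.com"   # hypothetical host
    AUTH="user@account:password"                       # hypothetical credentials
    for app in $(curl -s -u "$AUTH" "$CONTROLLER/controller/rest/applications?output=JSON" | jq -r '.[].name'); do
      echo "== $app =="
      curl -s -u "$AUTH" "$CONTROLLER/controller/rest/applications/$app/backends?output=JSON" | jq -r '.[].name'
    done

Grepping that output for the old and new service hostnames should split the 300+ applications into the two groups you need.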
Hello, I would like to add a value from a search in one index to the result of another search in a different index, to sum the results. Here is one search:

index=xxxxx_network_xxxxx | dedup host | stats count(host) as network

And here is another one:

index=xxxx_server_xxxxx | dedup host | stats count(host) as server

I need the value of network + server. Any ideas how to do this in one search, for use in a dashboard? Thanks,
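A minimal sketch of one way to do it in a single search (dc(host) per index is equivalent to dedup host | stats count(host); index names kept from the question):

    index=xxxxx_network_xxxxx OR index=xxxx_server_xxxxx
    | stats dc(host) as hosts by index
    | stats sum(hosts) as total

Note this sums the per-index distinct counts, so a host appearing in both indexes is counted twice, matching the behaviour of adding the two original results.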
I have a need to search the HWF for the apps that are currently used frequently, and also to see which apps are sending data to the indexers. Context: the Upgrade Readiness app has identified several apps that are not supported or are in need of an upgrade. I need to determine whether these apps are still needed and can be removed, or truly need to be upgraded prior to the Splunk version upgrade of the HWF.
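A possible starting point (a sketch only: per_sourcetype_thruput reports throughput by sourcetype rather than by app, so the sourcetypes still have to be mapped back to the apps that define their inputs):

    index=_internal source=*metrics.log group=per_sourcetype_thruput host=<your_hwf>
    | stats sum(kb) as total_kb by series
    | sort - total_kb

Sourcetypes that show no throughput over a representative time window point at apps that may be candidates for removal before the upgrade.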
Hello, I have a search which normally shows 4 rows; I need to display only one row, with only the Status column.

index=traderestitution source="web_ping://*" | stats count(eval(response_code="200")) as count_status | eval Status=if(count_status!="4","NOK","OK")

Here is the result of the search. Can someone help?
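A hedged suggestion: stats with no by clause already collapses to a single row, so appending a table command should leave just the Status column; comparing the count as a number rather than a string is also safer:

    index=traderestitution source="web_ping://*"
    | stats count(eval(response_code="200")) as count_status
    | eval Status=if(count_status!=4, "NOK", "OK")
    | table Status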
Is there a way that I can modify the columns shown on the default Triggered Alerts page? It currently only shows the default Time, Fired alerts, App, Type, Severity, and Mode columns. I want it to show ComputerName or something similar, so I don't have to click into each alert to see which system the alert is for.
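As far as I know the built-in page is not customizable, but a similar table can be built in a dashboard from the REST API (a sketch; the available fields vary by version, and a per-alert detail such as ComputerName would still have to come from the triggered search's own results):

    | rest /services/alerts/fired_alerts
    | table eai:acl.app savedsearch_name triggered_alert_count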
Background information: In our system, every visit consists of one or more actions. Every action has its own name; in Splunk it's a field named "transId". Every time an action is triggered, it gets a unique sequence number; in Splunk it's a field named "gsn". A customer has a unique id; in Splunk it's a field named "uid". For the duration of a customer's visit to our system, he has a unique session id; in Splunk it's a field named "sessionId". If we want to locate a complete operation by a user, we need to use uid and sessionId together. Like many other systems, the order of actions in our system is fixed under normal circumstances.

What we want: We want to create an alert to monitor for an abnormal order of actions. For example, an important action named "D" sits at the end of an action chain. Under normal circumstances, you must access our system in the action order "A B C D". But a hacker may skip action B, which may be the action that verifies his identity. The problem is that I don't know the command to get the abnormal results. We can accept having to input the order of actions for every action chain; it would be better to read the order from a configuration file.

What I've tried:

| stats count by sessionId uid transId gsn _time | sort 0 sessionId uid _time

I can get every user's order of actions with this command. Can you give me some advice? If you need more information, you can ask me here. Best wishes!
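A sketch of one way to flag sessions that reached D without ever passing through B (the index and action names are placeholders from the example; a lookup could supply the expected chain instead of hard-coding it):

    index=your_index transId=*
    | stats values(transId) as actions by uid sessionId
    | where isnotnull(mvfind(actions, "^D$")) AND isnull(mvfind(actions, "^B$"))

This only checks presence, not order; to validate the full ordering, sort by gsn, assemble the sequence with stats list(transId), mvjoin it into a string, and compare that against the expected "A B C D" pattern.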
| lookup local=true ipasncidr_def CIDR as dest_ip output Organization | lookup src_eonid_name.csv SRC_EONID OUTPUT "SRC_EONID NAME" | top limit=3 "SRC_EONID NAME", dest_ip, dest_port, servicenow, Organization by SRC_EONID | stats values(dest_ip) as "Destination" dc(dest_ip) as "Destination IP count" values(Organization) as "Organization" values(dest_port) as "Dest Ports" values(servicenow) as "Service now Tickets" by "SRC_EONID NAME" SRC_EONID

Hi everyone. I have this query that is saved as a dashboard (statistical table). Is there a way I can include percentages for all the fields captured in the statistical table?
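A hedged sketch for one of the columns (Destination IP count); the same eventstats-plus-eval pattern can be appended for any other numeric column:

    ... existing search ...
    | eventstats sum('Destination IP count') as total_ips
    | eval "Destination IP %" = round('Destination IP count' / total_ips * 100, 2)
    | fields - total_ips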
I followed the steps under "Migrate the KV store after an upgrade to Splunk Enterprise 8.1 or higher in a single-instance deployment" in this document: https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment

When I type in the command ./splunk splunk migrate kvstore-storage-engine --target-engine wiredTiger I get an error that the splunkd service is not running. I then changed the command to ./splunk migrate kvstore-storage-engine --target-engine wiredTiger (removing the duplicated "splunk" in the command) and got no status prompt after entering it. I then checked the mongod.log file and it shows no updates. I know Windows can be a bit finicky with Splunk, but is there something I am missing to get this migration from mmapv1 to wiredTiger working?
Hello, I see in the ITSI manual that bidirectional integration exists OOTB for ServiceNow and Splunk Infrastructure Monitoring. I would like the same functionality for Microsoft DevOps. Is this possible? Are there any frameworks or templates within ITSI that facilitate creating this type of integration? Thanks! Andrew
Greetings folks. I installed the TA-ms-teams-alert-action to... you probably guessed... send alert messages to Teams. After installation, exactly two messages were sent successfully to Teams; I even took screenshots. I recently realized I had not received any messages for events that I knew had happened, so I started digging. It looks like a lot of messages are stuck in a resending state. Further digging in the logs indicates that when the TA tried to send a message to the Teams webhook, it received a 404:

2022-04-06 00:35:45,922 ERROR pid=123018 tid=MainThread file=cim_actions.py:message:280 | sendmodaction - signature="Microsoft Teams publish to channel has failed!. url=https://totallyvalid.webhook.office.com/webhookb2/XXXXX , data={ }, HTTP Error=404, HTTP Reason=Not Found, HTTP content=<!DOCTYPE html> <span><H1>Server Error in '/WebhookB2' Application.<hr width=100% size=1 color=silver></H1> <h2> <i>The resource cannot be found.</i> </h2></span> <font face="Arial, Helvetica, Geneva, SunSans-Regular, sans-serif "> <b> Description: </b>HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. &nbsp;Please review the following URL and make sure that it is spelled correctly. <br><br> <b> Requested URL: </b>/webhookb2/XXXXX<br><br>

I am unclear how to proceed. I've changed the webhook URLs above for privacy, but the hooks in the logs and in the TA match the hooks in the Teams connector configuration. I know the webhooks work because they are in use by other tools and are not failing. I tested the webhooks from my laptop and was able to send a message. I tested the webhook from a search head and was able to send a message. Something appears to be munging the webhook URL, but I cannot determine how or where. And since it worked previously and has not changed (I am the only person with access), I can't figure it out. I suspect that the line "Server Error in '/WebhookB2' Application." is relevant. This is on Splunk Enterprise 8.2.2.2. Thoughts or strategies would be appreciated.
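For reference, the manual test described above looks something like this (a sketch; the webhookb2 path stays redacted, as in the logs):

    curl -X POST -H "Content-Type: application/json" \
      -d '{"text": "connectivity test"}' \
      "https://totallyvalid.webhook.office.com/webhookb2/XXXXX"

A successful post here from the search head, while the TA still gets a 404 against the same URL, would support the theory that the URL is being altered somewhere inside the TA.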
The Monitoring Console is very slow: the Overview page takes five minutes to load to report on about 150 indexers, and any other page takes a couple of minutes to load. What could the cause be?
Hi Community, I have an SPL query that runs from a savedsearch in Splunk Enterprise. When I run the query in Splunk I am able to get the output, but when I run the same query from a Linux server using a curl command I do not get any response. I have verified that curl is able to connect to the API and obtain a response, by checking the status code in the output. Example of the curl command:

/usr/bin/curl -sSku username:password https://splunk:8089/servicesNS/admin/search/search/jobs/export -d search="| savedSearch Trading_test host_token=MTE_MTEDEV_Greening time_token.earliest=-30d time_token.latest=now " -d output_mode=csv -d exec_mode=oneshot > output.csv

I tried to break the problem down to find where it goes wrong. I found that the savedsearch I was using had a table command to limit the number of columns generated. So I created a new savedsearch without any table command, and I was able to get the output as raw data. This is such unusual behaviour that I am not able to figure out what could have gone wrong. Could anyone let me know why this is causing a problem? Are there any alternatives I can use to fix it? Thanks in advance.

Regards, Pravin
I am checking the upgrade readiness of my Splunk apps under the ES search head. (We have a search head cluster and changes are pushed from the deployer; I am told that, once pushed, the changes flatten both the local and default folders.) While checking, I also want to know if any custom changes have been made to any app. If there are custom changes, I need to inform the respective owner to fix them before updating the app to make it py2 and py3 compatible. Now, I don't have access to the file system. I am told that by looking at btool debug output, I can find out whether any custom changes have been applied to an app. I just don't know what specifically in the btool output gives it away when a custom change has been made. What exactly should I look for in btool to know that? Any assistance here?
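In btool's --debug output, the first column of every line is the path of the file that the setting resolves from, so anything coming from a local/ directory points at a customization. A sketch for a couple of common conf types (repeat for whichever .conf files the app ships):

    ./splunk btool savedsearches list --debug | grep "local/"
    ./splunk btool props list --debug | grep "local/"

Caveat: if the deployer really flattens local into default before pushing, customizations may appear under default/ on cluster members, so comparing against a pristine copy of the app may be more reliable there.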
I configured the app, however it keeps returning to the setup page. Easy fix. Also, the ssl_check3.py script works fine and is pulling cert info as expected, however the manual one (ssl_checker2.py) is failing. I deployed the app via the deployment server, so there is no "local" folder, and hence no local/ssl.conf either. I looked at ssl_check2.py and it looks like it is also looking for default/ssl.conf, however when I manually run the script it returns the error "No such file or directory: '/opt/splunk/etc/apps/ssl_checker/bin/../local/ssl.conf'". Just for testing, I tried creating a local/ssl.conf, and it returned the error "'str' object has no attribute 'decode'". It also created an empty ssl.conf_tmp in local, which I assume is a result of the above error?
Hi Everyone, below is my query to apply a thousands comma separator:

|inputlookup abc.csv | chart sum(field1) as field1 by field2, field3 | addtotals | fieldformat field1 = tostring(field1, "commas")

In the result I am not getting commas in the field1 values (I tried eval as well as fieldformat). If I alter my query to use only one split-by field (field2 or field3) then I get the expected result, but I want the sum of field1 by both field2 and field3. Can someone help me with this issue? Thanks, ND.
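The likely cause: with two split-by fields, chart names its output columns after the field3 values, so no column called field1 remains for fieldformat (or eval) to act on. A sketch that formats every numeric column instead (the isnum guard skips the field2 column; note that eval-with-tostring turns the values into strings, which affects sorting):

    | inputlookup abc.csv
    | chart sum(field1) as field1 by field2, field3
    | addtotals
    | foreach * [eval <<FIELD>> = if(isnum('<<FIELD>>'), tostring('<<FIELD>>', "commas"), '<<FIELD>>')]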
I have a record that is returned because it matches a particular substring. Now I want to extract the whole string that the substring is part of. For example, I give "process completed" as the substring in my query, which returns a record containing "Takeover process completed with 390 files". Now I want to get the whole string "Takeover process completed with 390 files". How do I do this? Can somebody please help?
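A sketch using rex (it assumes the message sits on a single line of _raw; the boundary pattern may need tuning to your event format):

    index=your_index "process completed"
    | rex "(?<message>[^\r\n]*process completed[^\r\n]*)"
    | table message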
I want to find the difference between the maximum value and the minimum value in the multi-value field produced by grouping with the transaction command. Specifically, I group the web access logs by ID, and then I would like to find the time each ID was active, from login through its operations to logout. Does anyone have an idea for the SPL?
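Two hedged options (the field name id is a placeholder): transaction already computes a duration field, the time span from a group's first event to its last, and a plain stats range gives the same number more cheaply:

    index=web_access
    | transaction id
    | table id duration

    index=web_access
    | stats range(_time) as duration by id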
Hi, I have multiple fields counting how many items pass through gates:

| timechart count(eval(like(gate_id, "RG%"))) as items_RG, count(eval(NOT like(gate_id, "RG%"))) as all_items by building

I want to exclude the counts of items_RG from all_items, so I'm using:

| eval Total=all_items-items_RG

But it is not showing Total in the output. When I use stats instead, Total works, but I don't get the time column needed to show the graph as a timechart:

| stats count(eval(like(gate_id, "RG%"))) as items_RG, count(eval(NOT like(gate_id, "RG%"))) as all_items by building | eval Total=all_items-items_RG

I tried to use eventstats as well, but couldn't get what I want.
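The likely culprit is the split-by: timechart with by building renames each series to "items_RG: <building>" and "all_items: <building>", so plain all_items and items_RG no longer exist for eval to reference. A sketch using foreach wildcards to compute a per-building Total (untested against your data; syntax per the foreach docs):

    | timechart count(eval(like(gate_id, "RG%"))) as items_RG count(eval(NOT like(gate_id, "RG%"))) as all_items by building
    | foreach "all_items: *" [eval "Total: <<MATCHSTR>>" = '<<FIELD>>' - 'items_RG: <<MATCHSTR>>']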
[screenshot of a sample event with one value highlighted] I have to extract the highlighted value as a single field in Splunk. Any help?