All Posts

I don't have a preference on which to use; I just need to be able to use the lookup efficiently for this search. It seems that WHERE and IN are not the correct clauses to use with lookups? I'm not sure.
That might be a bit loosely worded, but it's used here, for example: https://docs.splunk.com/Documentation/Splunk/9.1.2/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest If you look at the Masa diagrams you'll see which queue the indexQueue is. By default the splunktcp input routes events into different queues depending on which keys are present in the data, so if the data is not parsed it goes into the parsingQueue, and so on. Check system/default/inputs.conf.
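For reference, that default routing lives in the [splunktcp] stanza of $SPLUNK_HOME/etc/system/default/inputs.conf. The stanza below is only a sketch of what it looks like; the exact keys and their order can differ between Splunk versions, so check your own default file rather than relying on this verbatim.

[splunktcp]
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue

In other words, data that already carries the _linebreaker key (i.e. it was parsed on the sending side) goes straight to the indexQueue, while unparsed data is routed to the parsingQueue.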
Can this be done as a feature request? We are the Control Center IoT team in Cisco, and we can coordinate on explaining the use cases. We need it to be preconfigured for all the Java agents, as we have multiple environments and don't want to create it every time for each instrumented application.
We are in the process of generating events in ServiceNow using the Splunk Add-on for ServiceNow. We are passing event information in the description field to communicate to the end user what actions need to be addressed. As part of the output we want to include a table of information that summarizes the events detected. We are able to aggregate and group the information as necessary, but we are having a hard time establishing a pattern where we can consistently control the output.

We have had issues formatting the data and we are seeking guidance on how we can exert greater control over the format. We would like to include a brief sentence with instructions on how to move forward, and we would like to identify all impacted events in table format. The header row and the values need to be aligned and easy to read for the end user. I have used an mvappend statement to add the header to a column, but could not concatenate the information in a manner that displays it as a table. This is roughly what we have so far:

| eval instructions = "The account is a controlled account and you will need to provide justification for accessing the account outside of security controls.  Please review the table of events and provide insight into why control was violated." . " "
| eval cheader = "Host Account Action "
| eval tabledata = host . " " . Account . " " . Action
| eval instructions = instructions . cheader . tabledata

The message we are seeking would look like the content below:

"The account is a controlled account and you will need to provide justification for accessing the account outside of security controls. Please review the table of events and provide insight into why control was violated."

Table of Events:
Host         Account   Action
LC200506     admin     Success
LC200507     admin     Failure
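A sketch of one possible direction, assuming the host, Account and Action fields already exist on each result (the column widths are illustrative): pad every column to a fixed width with the printf() eval function and keep the rows as a multivalue field, with the header prepended via mvappend.

| eval row = printf("%-18s %-12s %-10s", host, Account, Action)
| stats list(row) as tabledata
| eval cheader = printf("%-18s %-12s %-10s", "Host", "Account", "Action")
| eval tabledata = mvappend(cheader, tabledata)
| eval instructions = "The account is a controlled account and you will need to provide justification for accessing the account outside of security controls.  Please review the table of events and provide insight into why control was violated."

Note the alignment only holds if ServiceNow renders the description in a monospaced font; with a proportional font a delimiter such as " | " between the columns may read better.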
Hi @Yogesh.Joshi, Please check out this documentation and see if it helps: https://docs.appdynamics.com/appd/22.x/latest/en/analytics/configure-analytics/collect-log-analytics-data
Hi @Sikha.Singh, Here is what I found on backing up and restoring on our AppD Docs site: https://docs.appdynamics.com/appd/onprem/latest/en/plan-your-deployment/physical-machine-controller-deployment-guide/controller-data-and-backups/controller-data-backup-and-restore
I have a panel in a dashboard that plots a trend line for the last 24 hours. Now I want to create a new alert query that follows the trendline of the panel. If the output of the alert query doesn't match the pattern of the panel query (not exactly, but to an extent), it should trigger an alert.
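A rough sketch of one way to express this (the index, sourcetype, 15-minute span, 5-point moving average and 20% threshold are all placeholders for the panel's actual query and an acceptable tolerance): compare each time bucket against a moving average computed with the trendline command and keep only the buckets that deviate too much.

index=my_index sourcetype=my_sourcetype earliest=-24h
| timechart span=15m count
| trendline sma5(count) as expected
| eval deviation = abs(count - expected) / expected
| where deviation > 0.2

The alert would then be set to trigger when the number of results is greater than zero.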
Can you tell us more about this setting?  The inputs.conf.spec file says setting the value to "indexQueue" sends data 'directly into the index', implying no parsing is done (is that even possible?).  Under what conditions would we use indexQueue?
I've recently been advised that our organization intends to do away with the production domain where our current Splunk cluster resides and move everything over to the other domain in use. This implementation currently has nodes in two different domains, and the domain that is going away happens to house both our cluster manager and four indexers in a two-site configuration running Splunk Enterprise 9.1.1. I don't yet have all the details (i.e., whether the IP/hostname is changing or not), but in an effort to do some pre-emptive housecleaning I changed the 'serverName' on one of the indexers in advance, going from the FQDN to just the hostname, and got CM complaints that it couldn't rejoin the cluster because the GUID belonged to another indexer.

01-16-2024 13:43:03.307 +0000 ERROR ClusterMasterPeerHandler [25028 TcpChannelThread] - Cannot add peer=X.X.X.X mgmtport=8089 (reason: Peer with guid=<GUID> is already registered and UP).

This error feels a little bit like a chicken-and-egg situation. Essentially I put the CM into maintenance mode, stopped the peer, updated serverName in server.conf, and started it back up. Perhaps I should have used 'splunk offline' instead of 'splunk stop' here? This has me thinking the operation we're about to undertake is a fairly complex one. I haven't been able to find any relatively recent posts about doing something similar, aside from a 2016 blog post that makes no mention of GUIDs and which I presume was referring to standalone indexers rather than clustered ones. Changing the GUID is presumably a non-starter because the existing buckets all reference it in their names. Long story short, I'm looking for an order of operations and some dos/don'ts for an undertaking like this.
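For clarity, the sequence I ran was roughly the following (hostnames and paths are placeholders, and the 'splunk stop' step is the one I now suspect should have been 'splunk offline'):

# on the cluster manager
splunk enable maintenance-mode

# on the indexer being renamed
splunk stop
# edit $SPLUNK_HOME/etc/system/local/server.conf:
#   [general]
#   serverName = <short hostname, previously the FQDN>
splunk start

# back on the cluster manager
splunk disable maintenance-mode

The "Peer with guid=<GUID> is already registered and UP" error above appeared when the renamed peer tried to re-register with the CM.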
Hi @Abdulrahman.Kazamel, Thanks for sharing your solution!
We were able to resolve this issue by editing the UI file so that the missing Teams Call QoS page would display: Settings > User Interface > Navigation Menus > microsoft_cloud_app. The Call Record Monitoring collection was commented out.
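Concretely, in the navigation XML that page lives under a collection element with that label; in our case the whole element was wrapped in an XML comment, and removing the comment markers restored it. The snippet below is only a sketch - the actual view entries inside the collection are whatever the app ships with:

<collection label="Call Record Monitoring">
  <!-- the Teams Call QoS view entries shipped by the app go here -->
</collection>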
I have to trim the ITSI KV store collection size. I have created a local itsi_notable_event_retention.conf file in $SPLUNK_HOME/etc/apps/SA-ITOA/local/ and overridden the default value of retentionTimeInSec to 3 months. However, the number of objects in the collection is still growing, and hence so is the collection size. How do I trim the collection size? I followed this document: Modify notable event KV store collections in ITSI - Splunk Documentation. Please assist.
If you're using the IN clause you need to provide, as the message says, a list of literals. So you should do: something IN ("val1", "val2", "val3"). You can't give a list of conditions, as you apparently tried to do.
No. It can be a bit misleading, but it shows that TLS isn't properly configured on this port. With TLS you should have gotten a server certificate and all the gory encryption protocol details. Also, as you noticed yourself in the other comment, you can successfully call curl requesting a plain non-encrypted http:// resource. Since Splunk doesn't serve both TLS-enabled and non-TLS services on the same port, it means you simply have to configure it.
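A quick way to confirm which case you're in is to probe the port both ways with curl (host and port below are placeholders). On a TLS-enabled port the verbose output includes the handshake and server certificate; on a plain-HTTP port only the second request gets an answer:

curl -vk https://splunk-host:8089/

curl -v http://splunk-host:8089/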
1. General remark - don't use wildcards at the beginning of your search term if you can avoid it. If you do that, Splunk has to search all events from the given time range to verify whether any events match this condition - it can't use its indexed database of terms. In your case the difference might be insignificant if the data sample is small, but as a general rule it's good to know, because with a big search it can make a huge difference in performance.

2. I'm not sure what you want to achieve. If you want to return only the results which match a particular field value from your lookup, you have to return it from the lookup and then filter results based on the values of this field. So if you want to filter your results to only return values for particular values of the app-name column in your lookup (which you return from the lookup as the application field) you need to do:

index=summary type=isilon_nfs-quota-alert (path="*appsdata*")
| lookup apps-nfs.csv nfs-name as path output nfs-name as found, app-name as application
| search application IN ("app1", "app2", [...])

(You can also use the "where" command to filter your results instead of "search".)
Hi @Shwetha.Gattu, I reached out to Atyuha and they mentioned the way they suggested is still the only way. 
Hi @Dishant.Mokal, Did you have any luck with finding a solution?
Well, the answer can be "because that's just how Splunk works". And it's important to know that. If you're matching a field value to something (even if it's a wildcard meaning any value), the field must exist. If the field does not exist for a given event (which can also be interpreted as the field having a null value), it will not match. This matters because it can cause unexpected results if you match your events incorrectly: as you noticed, field=* does not mean "regardless of whether there is any value in that field" but rather "if there is a value in that field, regardless of what it is" - a subtle, yet significant difference. It also means that matching field!=value is not the same as matching NOT field=value. The first condition means that the field must exist and its value must be different from the specified value (which means the condition won't match an event where there is no field at all!), while the second means that either the field exists with another value or the field isn't there at all. It's important to learn this distinction, because it can make all the difference between finding the right stuff and not finding it.
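A quick, self-contained way to see the difference (the status field and its values are made up purely for illustration):

| makeresults count=3
| streamstats count as n
| eval status = case(n=1, "200", n=2, "500")
| search status!=500

The case() gives the third result no status field at all. The search above returns only the n=1 event; swap the last line for | search NOT status=500 and you get both the n=1 event and the field-less n=3 event.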
Here is the answer on why it's happening. https://community.splunk.com/t5/Splunk-Enterprise/Encountered-Streaming-S2S-error-Too-many-fields/m-p/674160
Hello, I am adding an Alert Action with Splunk Add-on Builder, but when I click "save" it basically goes into timeout.

01-16-2024 17:01:31.340 +0100 ERROR HttpClientRequest [24831 TcpChannelThread] - HTTP client error=Read Timeout while accessing server=http://127.0.0.1:8065 for request=http://127.0.0.1:8065/en-US/custom/splunk_app_addon-builder/app_edit_modularalert/add_modular_alert.

In the meantime, if I open a new tab in the browser, whichever page I request times out as well.

01-16-2024 17:02:18.114 +0100 ERROR HttpClientRequest [7954 TcpChannelThread] - HTTP client error=Read Timeout while accessing server=http://127.0.0.1:8065 for request=http://127.0.0.1:8065/en-US.

Looking into the /opt/splunk/etc/apps folder, it seems my app is stuck in a TA-splunk-myapp_temp_output folder while it is saving.

splunk@SearchHead:~/etc/apps > ls -latr
drwxrwxrwx 10 splunk splunk 4096 Jan 15 16:02 TA-splunk-myapp
…
drwxrwxrwx 3 splunk splunk 4096 Jan 16 16:53 TA-splunk-myapp_temp_output

I also tried to:
- delete the TA-splunk-myapp_temp_output folder, restart Splunk, and try saving again
- increase resources from 16 CPU/32 GB to 32 CPU/64 GB

but I have the same issue. It seems that the timeout comes from the "appserver" that runs on port 8065. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Webconf

appServerPorts = <positive integer>[, <positive integer>, <positive integer> ...]
* Port number(s) for the python-based application server to listen on. This port is bound only on the loopback interface -- it is not exposed to the network at large.
* Generally, you should only set one port number here. For most deployments a single application server won't be a performance bottleneck. However you can provide a comma-separated list of port numbers here and splunkd will start a load-balanced application server on each one.
* At one time, setting this to zero indicated that the web service should be run in a legacy mode as a separate service, but as of Splunk 8.0 this is no longer supported.
* Default: 8065

I am thinking about:
- putting the logs in DEBUG
- adding other ports to start a load-balanced application server

Any suggestion is really appreciated.

Thanks a lot,
Edoardo
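For reference, my understanding is that the load-balanced appserver change would be a web.conf override on the search head, along these lines (the second port number is just an example), followed by a restart of Splunk web:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
appServerPorts = 8065, 8066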