All Posts



eval env=if(index="*non_prod*", "Non-Prod", "Prod")

This won't work. At least not the way you want it to. Your condition compares the index to the literal value *non_prod*. Since index names cannot contain asterisks, this condition will never evaluate to true. You need to use one of the other comparison functions - https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ConditionalFunctions

Suitable candidates: like(), match(), searchmatch()
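For example, a quick sketch using like(), which takes SQL-style % wildcards (the index value below is made up for illustration):

```spl
| makeresults
| eval index="sony_app_12345_non_prod"
| eval env=if(like(index, "%non_prod%"), "Non-Prod", "Prod")
```

match() takes a regular expression instead, so the equivalent condition there would be match(index, "non_prod").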
Hello @kiran_panchavat , Thanks for your reply and for confirming that Splunk Enterprise doesn't require this option. We can launch the Splunk Enterprise console through the Search option from AppDynamics, so the connection works well. Thanks again for your prompt reply. Kudos to you. Regards, Selvaganesh E
OK. You can't visualize it like this without additional non-SPL logic (like custom JS in your dashboard). Apparently the colour of the grid cell depends on another factor (job status) which is not contained within the cell itself. That's one thing. Two other issues you're facing (but those can be solved with SPL) are:

1) You need to combine two values - start time and end time - into a single string value. Splunk cannot "merge cells", so you need a single value per grid cell. That's relatively easy: just concatenate the two string fields with a "\n" character to split the line in two, or combine the two values into a multivalue field.

2) This one is trickier - you can "wrap" your data set into single days by means of timechart, but you can only split a timechart by one field. So you can't do this timechart over both job _and_ country. You'd need to first combine job and country into a single field to categorize your jobs, do a timechart over this combined field, and finally split that field back into two separate fields.
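The combine-then-split approach from point 2) could look roughly like this (the field names job and country and the ":" separator are assumptions for illustration):

```spl
... | eval series=job.":".country
| timechart span=1d count by series
| untable _time series count
| eval job=mvindex(split(series, ":"), 0)
| eval country=mvindex(split(series, ":"), 1)
```

untable turns the wide timechart output back into rows, so the combined field can be split again afterwards.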
The general answer to questions like "how to find which hosts send to which indexes" is "you can't do that reliably". There are some things you can do to find this info in specific situations, but they will not cover all possible scenarios.

1. As @livehybrid already pointed out, you can try browsing through the forwarders' metrics. There are two caveats here:
- the metrics are limited to a fixed number of top data points, so if your forwarder is sending to a huge number of different indexes you might not see them all
- events can be rerouted on HFs/indexers to indexes other than the ones they were initially destined for

2. You can simply check the host field. But this is a very unreliable technique and only works if you're capturing the events locally with the forwarder and don't override the host in any way.

3. You can configure your environment (but this needs to happen beforehand) so that forwarders add metadata to events by means of additional indexed fields or - for some types of sources - the source field. This might get complicated and difficult to maintain if you don't use orchestration tools, and it might have limitations if you're using multi-hop ingestion paths.
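If the host field is good enough for your case (with all the caveats from point 2), a quick tstats sketch would be:

```spl
| tstats count where index=* by index, host
```

Keep in mind this only tells you which host values ended up in which index, not which forwarder actually sent the events.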
All I want to get from the subsearch is to bring back the field actions.  It can probably be a much smaller search.
Hello, We have separate indexes created for non-prod and prod. Sample index names:

sony_app_XXXXXX_non_prod - for the non-prod env
sony_app_XXXXXX_prod - for the prod env

XXXXXX are Application ID numbers (different per application), and we have other indexes as well (besides the non-prod and prod ones). I want a field called env derived from the index name: for all non-prod indexes env should be Non-Prod, and for prod indexes env should be Prod. The command below

index=sony* | eval env=if(index="*non_prod*", "Non-Prod", "Prod")

will not work for Prod because we also have other indexes which include neither non_prod nor prod, but it is giving all values as Prod in env. Kindly help me with a solution to achieve this.
Ok. You have two ends of the connection - don't try to fiddle with both of them at the same time. First configure the receiving end (in your case the indexer); when you have it working properly, start configuring the client (the UF).

Your inputs.conf on the indexer looks OK. You should now be able to connect with

openssl s_client -connect your_indexer:9997

and get a properly negotiated SSL connection (as long as your client trusts your indexer's cert issuer). If you're at this step, you can move forward. If at this step the connection is rejected by the indexer because you're not presenting a cert, there's something wrong with your indexer's configuration.

If you have sslVerifyServerCert=false, you should not need any other parameters except useSSL=true, because your UF will not be verifying the cert anyway.

Remember to always check your configs with btool:

splunk btool check
splunk btool inputs list --debug
splunk btool outputs list --debug
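Under those assumptions, a minimal UF outputs.conf sketch would be something like this (the group name and indexer address are placeholders):

```
[tcpout:primary_indexers]
server = your_indexer:9997
useSSL = true
sslVerifyServerCert = false
```

This encrypts the traffic but skips verification of the indexer's certificate, so it is only appropriate while testing.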
@SelvaganeshE The add-on you are trying to use has been archived. I recommend checking this add-on for AppDynamics integration instead: https://splunkbase.splunk.com/app/3471
@SelvaganeshE The "IP Allow List" feature is specific to Splunk Cloud and is not available in Splunk Enterprise (on-premise) deployments. For integrating Splunk Enterprise with AppDynamics SaaS, you might need to look into alternative methods for securing and managing access, such as configuring firewall rules or using other network security measures.
Hi everyone, I am testing the Smart Agent appdcli utility and have encountered an issue. When I try to UPGRADE a machine agent by running the following command:

appd upgrade machine --inventory hosts.ini --connection ssh --config config.ini --version latest

the agent starts communicating with the Controller, but ServerMonitoring fails (see figure_1). However, when I INSTALL the same agent and version with the following command:

appd install machine --inventory hosts.ini --connection ssh --config config.ini --version latest

everything works fine (see figure_2). Do you have any idea why? The problem only appears when I upgrade a machine agent running on Linux (Ubuntu 23.10 [mantic]). On Windows, I have not encountered this issue. Regards, Lukas
Hello Team, We are trying to integrate Splunk Enterprise (version 9.3.2) with AppDynamics SaaS (version 24.10). As per the documentation, we need to add the AppDynamics SaaS IP address to the search head API in Splunk. To add the IP address I navigated to Server Settings, but I am unable to see an "IP Allow List" option in the Splunk console. Note: I have logged into Splunk with an admin ID. Please help me fix this issue. Thanks, Selvaganesh E
Hi @Dk123  Do you see other kvstore errors in splunkd.log and mongod.log?
Dashboard Studio - error while updating the auto-refresh value: [Error: Visualization is not present in layout structure]: Visualization "viz_XQInZkvE" is not present in Layout Structure.

If I try to change the refresh rate from 2m to any other value, I get the above error. It looks like some default value, or something cloned from another dashboard. Could someone help with this? My last panel is:

"title": "E2E Customer Migration Flow - MigrationEngine + NCRM Clone",
"description": "BPM Dashboard.SparkSupportGroup:Sky DE - Digital Sales - Product Selection & Registration",
"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                },
                "refresh": "2m"
            }
        }
    }
}
}
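In the dashboard's source JSON, each visualization id has to appear both under "visualizations" and as an "item" in "layout.structure"; the error usually means those two sections are out of sync. A minimal sketch of the expected shape (the visualization type and position values below are illustrative, only the id is taken from the error message):

```json
{
    "visualizations": {
        "viz_XQInZkvE": {
            "type": "splunk.table",
            "dataSources": { "primary": "ds.search" }
        }
    },
    "layout": {
        "type": "absolute",
        "structure": [
            {
                "item": "viz_XQInZkvE",
                "type": "block",
                "position": { "x": 0, "y": 0, "w": 600, "h": 300 }
            }
        ]
    }
}
```

Checking that "viz_XQInZkvE" is referenced in layout.structure (or removing leftover references inherited from the cloned dashboard) may clear the error.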
Please provide some sample events (anonymised appropriately) and a non-SPL description of what you are trying to achieve. It would also help to know what it is about your current search that does not provide the information you require.
This is the 2017 post: https://community.splunk.com/t5/Getting-Data-In/inputs-conf-and-outputs-conf-for-SSL-encryption/m-p/324308
When I asked this question, I had already added the following setting under [sslConfig] in both my indexer's and UF's server.conf:

sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

However, I still encountered the same issue as described in my original question.

Additionally, my indexer's inputs.conf is configured as follows:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = ServerCertPassword
requireClientCert = false

I have followed Splunk's official documentation and tried various configurations, but all attempts failed. Then I found a 2017 post on the Splunk Community forum and decided to try the suggested configuration. That configuration is exactly what I am using now, and it worked successfully. I don't fully understand this configuration, which is why I have asked these three questions.
Could someone help me with this, please? ;(
@interrobang How about something like this?

index=_internal group=per_index_thruput series=*
| bin _time span=10m
| stats count by _time host
| stats list(*) AS * by _time
| table _time host count

Which produces a table that looks like this (screenshot not reproduced here).

Please let me know how you get on, and consider upvoting/karma-ing this answer if it has helped. Regards, Will
Hi @tt-nexteng , Do you have requireClientCert set within your inputs.conf file on your receiving Splunk instance?

sslCertPath in outputs.conf is actually deprecated and clientCert should be specified instead, although obviously this only matters if you intend to use mutual auth. sslRootCAPath in outputs.conf is also deprecated and should instead be set in server.conf under the [sslConfig] stanza. Perhaps the CA isn't being picked up by the output processor, and it is therefore using the combined cert you have specified in sslCertPath.

Try updating server.conf/[sslConfig]/sslRootCAPath to point to your CA file and see if this resolves the issue. Please let me know how you get on, and consider upvoting/karma-ing this answer if it has helped. Regards, Will
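A sketch of the suggested layout on the UF, reusing the CA path from earlier in this thread (the client cert filename and output group name are made up; clientCert is only needed if you want mutual auth):

```
# server.conf (on the UF)
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

# outputs.conf - clientCert only if the indexer sets requireClientCert=true
[tcpout:primary_indexers]
server = your_indexer:9997
useSSL = true
clientCert = /opt/splunkforwarder/etc/auth/mycerts/myClientCert.pem
```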
The Jamf Pro Add-On for Splunk does not work with Splunk Cloud. We have spent days trying to get this working with both Jamf and Splunk, only to find that this setup is currently incompatible. This ... See more...
The Jamf Pro Add-On for Splunk does not work with Splunk Cloud. We have spent days trying to get this working with both Jamf and Splunk, only to find that this setup is currently incompatible. This has been confirmed by both Jamf and Splunk. It appears that the 'Jamf Protect Add-On' is compatible with Splunk Cloud. Hopefully these two add-ons are similar in construction and the Jamf Pro Add-On can be updated ASAP. https://splunkbase.splunk.com/app/4729 https://learn.jamf.com/en-US/bundle/technical-paper-splunk-current/page/Integrating_Splunk_with_Jamf_Pro.html Thanks!