All Topics



Hello. Some of the logs coming from the Windows Universal Forwarder to Splunk show the following error in the message field for certain events: "Splunk could not get the description for this event." I have reviewed [https://community.splunk.com/t5/Getting-Data-In/Why-quot-FormatMessage-error-quot-appears-in-indexed-message-for/td-p/139980], but it doesn't solve the issue, as this problem only occurs for a few specific events at specific times. I am using Splunk version 9.2. What could be the issue?
Please help me with SPL for the following: whenever the keyword "ERROR OCCURRED DURING MESSAGE EXCHANGE" appears and REQ=INI does not occur within a few minutes afterwards, raise an alert.
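A sketch of one possible approach (the index name and exact keyword spelling are assumptions; adjust to your data): bucket events into 5-minute windows and flag windows where the error appears but no REQ=INI follows.

```
index=your_index ("ERROR OCCURRED DURING MESSAGE EXCHANGE" OR "REQ=INI")
| bin _time span=5m
| stats count(eval(searchmatch("ERROR OCCURRED DURING MESSAGE EXCHANGE"))) as errors count(eval(searchmatch("REQ=INI"))) as inits by _time
| where errors > 0 AND inits = 0
```

Saved as an alert scheduled every 5 minutes, triggering when the number of results is greater than 0, this raises an alert for any window containing the error with no matching REQ=INI.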
I am unable to search my custom fields in Splunk after migrating an index from standard to federated. Do I have to change something in the field extractions, or did something go wrong in the migration?
We have two separate Splunk instances with ES (standalone, not clustered); consider them a HO/DR pair. When I fail over to the DR instance and copy /etc/apps, after restarting the DR instance all the notables are in New status. Notables that are closed on the HO instance also show as New on DR. What could be the reason? I do know that notable status is managed in a KV store. If we have to migrate the KV store for this, what are the best practices?
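For the KV store piece, one route worth considering (a sketch; verify the commands against your version's documentation) is Splunk's built-in KV store backup/restore rather than copying raw files, since /etc/apps alone does not carry KV store collections such as notable status:

```
# On the HO instance (archive is written under $SPLUNK_HOME/var/lib/splunk/kvstorebackup)
splunk backup kvstore -archiveName es_notables

# Copy the archive to the same path on the DR instance, then:
splunk restore kvstore -archiveName es_notables.tar.gz
```

The archive name here is a placeholder; restoring overwrites matching collections on the target, so test on a non-production instance first.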
I have an indexer cluster set up with a load balancer in front of it. I want syslog to be ingested into the indexers. My plan is to install the Universal Forwarder on the Linux servers and send the syslog to the indexer cluster. The problem is: how can I configure the Universal Forwarder to send its data through the load balancer?
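If you do point the UF at the LB, a minimal outputs.conf sketch on each forwarder would look like this (the VIP hostname and port are placeholders). Note that listing the individual indexers and letting the forwarder's built-in auto load balancing distribute the data is the more common Splunk pattern than an external LB:

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = lb-vip.example.com:9997
```

With forwarder-side balancing instead, `server` would be a comma-separated list of indexer host:port pairs.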
Hi everyone, I’m trying to visualize the network traffic of an interface in Splunk using an area chart. However, the Y-axis scale is currently displaying as K, M, B (for thousand, million, billion), but I would like it to show K, M, G, T (for kilobytes, megabytes, gigabytes, terabytes). Is there a way to adjust this? Thanks!
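I'm not aware of a built-in K/M/G/T axis format; a common workaround (a sketch — index, sourcetype, and field names are assumptions) is to convert to a fixed unit before charting, so the axis label itself carries the unit:

```
index=network sourcetype=interface_stats
| timechart span=5m sum(bytes) as bytes
| eval GB=round(bytes/pow(1024,3), 2)
| fields _time GB
```

The trade-off is that one unit is fixed for the whole chart rather than scaling per tick.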
When displaying a Choropleth Map on a dashboard, the painted areas render collapsed. When I display the visualization in the Search app, there is no problem. Has anyone experienced the same issue, or does anyone have ideas on how to solve it? On Splunk Enterprise 9.3.1 it renders fine in Dashboard Classic, but the issue occurs in Dashboard Studio. (Screenshots: Choropleth Map in the Search app, in Dashboard Studio, and in Dashboard Classic.) Thanks,
I am looking for a visualization mechanism to colorize slices of a pie chart by their status: OK (green), Warning (yellow), Major (orange), Critical (red). All of the pie chart viz examples I have seen are ranked by count of some category, and I want to rank by status. In the example below, I have 4 groups of services, each with a number of service instances providing service up to a maximum number defined for the group. I would like to visually see each group's N-of-M colored by status and not ranked by count. Any ideas on where to go? The pie chart viz is ruled out per the above (I think). I looked at other visualizations such as the sunburst, but they didn't present the way I wanted.

Example SPL:

| makeresults
| eval dbs = "[{\"entity\":\"I0\",\"instanceCount\":\"0\",\"instanceMax\":\"3\"},{\"entity\":\"I1\",\"instanceCount\":\"1\",\"instanceMax\":\"3\"},{\"entity\":\"I2\",\"instanceCount\":\"2\",\"instanceMax\":\"3\"},{\"entity\":\"I3\",\"instanceCount\":\"3\",\"instanceMax\":\"3\"}]"
| spath input=dbs path={} output=dblist
| mvexpand dblist
| spath input=dblist
| eval pct_avail=round(100*instanceCount/instanceMax,1)
| eval status=case(pct_avail=100, "OK", pct_avail>=50, "Warning", pct_avail>1, "Major", true(), "Critical")
| eval color=case(status="Critical", "#FF0000", status="Major", "#D94E17", status="Warning", "#CBA700", status="OK", "#118832", true(), "#1182F3")
| stats count by entity
Imagine, if you will, a table-view lookup that has been set up to pull the host name, the environment (Dev/Test/Prod), the server type (Database, Web App, SSO, etc.), and the application the server supports. I have 4 input field LOVs set up:
1. Enclave: lets me choose Dev / Test / Prod — those are the only 3 options; the token name is "enclave".
2. Type: shows Database, Web App, SSO, Other — again those are the only options; token name is "type".
3. Application: say HR, Payroll, Order Entry, and HealthCare — again, 4 options; token name is "app".
4. This should be a DYNAMIC LOV that shows only the servers in the lookup that meet the conditions set by the first 3 LOVs. For example, with Enclave set to Dev, Type set to Web App, and Application set to HR, my lookup clearly shows there are 2 web app server names, so the 4th LOV should show Server003, Server007, All. The token would then be set based on the choice (003 or 007), and if "All" were picked the token would be Server003, Server007. This would drive the panel searches.
Is this possible? I can get the 4th LOV to run, but it doesn't give me a list.
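A sketch of how the 4th input could be populated in Simple XML (the lookup file and field names are assumptions), filtering the lookup search by the first three tokens so the choices update dynamically:

```xml
<input type="dropdown" token="server" searchWhenChanged="true">
  <label>Server</label>
  <choice value="*">All</choice>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>| inputlookup server_inventory.csv
| search enclave="$enclave$" type="$type$" app="$app$"
| fields host</query>
  </search>
</input>
```

Here "All" is modeled as a wildcard value rather than an expanded server list, which keeps the panel searches simple (`host="$server$"`); expanding "All" into a literal comma-separated list would need extra token handling.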
I wanted to index the span tag "error" to be able to filter spans by this tag and to create alerts based on it. I tried to add a custom MetricSet. Unfortunately, after I start the analysis, I don't see the check mark action to activate my new MetricSet. I have followed the instructions on this page: https://docs.splunk.com/observability/en/apm/span-tags/index-span-tags.html#index-a-new-span-tag-or-process
Hello everyone, I'm new and trying to learn. I've searched for hours trying to get a dashboard to display the computers within my domain and whether they are online or not, with an associated time. The time associated with being up or down isn't important, just a nicety.
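One hedged starting point (the index name is an assumption and the 15-minute "online" threshold is arbitrary) uses the metadata command to get each host's most recent event time:

```
| metadata type=hosts index=wineventlog
| eval status=if(now() - recentTime < 900, "online", "offline")
| eval last_seen=strftime(recentTime, "%Y-%m-%d %H:%M:%S")
| table host status last_seen
| sort status
```

This only sees hosts that have ever sent data to that index; hosts that have never reported will not appear, so a lookup of expected hosts is often added for true "missing host" detection.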
I have about 100 servers. These are a mix of different Oracle servers: databases, web app servers, data warehouse servers, SSO servers, and OBIEE servers. Across these there are also the standard Dev/Test/Prod environments, and all of this supports 5 different development/sustainment projects.

A request was made to our Splunk admin in the form of the server names and all of the log files our engineer could think of at the time. It appears the Splunk admin just crammed everything into a single index. Literally hundreds of log files, as each server appeared to have 10-15 log files identified. Given the servers do different things, the request didn't necessarily have the same log files identified for every server. I would have "expected" the request to be vetted with "What do you really need?" rather than "HERE YOU GO!" Maybe I've done software development too long; it could be me.

Anyway, was this the right way to go? Would it have made more sense to have one index for the database servers, one for the web app servers, one for the data warehouse, etc.? Or perhaps one index for the production assets, one for test, and one for dev? There doesn't appear to be a "best practice" that I can find... and what I have is ONE FREAKING HUGE index.

If you read this far, thanks. If you have a cogent answer that makes sense to me, even better!
I have a sample log. How do I create line breaking in props.conf on the indexers so that Splunk recognizes the header (###) as the first line of the event message?

Sample log:

########################################################################
Thu 05/02/2024 - 8:06:13.34
########################################################################
Parm-1 is XYZ
Parm-2 is w4567
Parm-3 is 3421
Parm-4 is mclfmkf
Properties file is jakjfdakohj
Parm-6 is %Source_File%
Parm-7 is binary
Parm-8 is
Parm-9 is
SOURCE_DIR is mfkljfdalkj
SOURCE_FILE is klnsaclkncalkn
FINAL_DIR is /mail/lslk/jdslkjd/
FINAL_FILE is lkjdflkj_*.txt
MFRAME is N
Version (C) Copyright *************************************************
Successfully connected

I want Splunk to include the ### row as the first line of the event message, but currently the event only breaks from the second line (Thu 05/02/2024 - 8:06:13.34). Please let me know.
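A props.conf sketch (the sourcetype name is a placeholder): the lookahead anchors the break to a ### row that is immediately followed by a date line, so the second ### row inside the header does not start a new event.

```
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=#{20,}[\r\n]+\w{3} \d{2}/\d{2}/\d{4})
TIME_FORMAT = %a %m/%d/%Y - %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 150
```

Because the capture group consumes only the preceding newlines, the ### row itself stays at the top of each event; test against your real data with "Add Data" preview before deploying.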
I need a query to remove duplicates from a stats count.

Sample input:

event  email
abc    xyz@email.com
abc    xyz@email.com
abc    test@email.com
abc    test@email.com
xyz    xyz@email.com

Expected output:

event  count
abc    2
xyz    1

What I am getting:

event  count
abc    4
xyz    1
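Assuming the fields are already extracted as event and email, counting distinct emails per event collapses the duplicates:

```
... | stats dc(email) as count by event
```

An equivalent alternative is to dedup the pairs first (`| dedup event email | stats count by event`), which is handy if you also want to keep other fields from one representative row.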
Hi everyone, I updated the version of my database agent, and by default AppDynamics set the name to "default database agent", but I need to customize the name for each one and couldn't find where to set this configuration. Can anyone help me figure out where to change the database agent name? Thanks!
I get the error shown in the title when trying to upload a CSV as a lookup. I tried the solution mentioned here: https://community.splunk.com/t5/Splunk-Search/What-does-the-error-quot-File-has-no-line-endings-quot-for-a/m-p/322387 but that doesn't work. Any suggestions?
I get weekly email updates with results from weekly URA scans. After noticing that we had outdated apps, we rolled out updates for three public apps: Sankey Diagram, Scalable Vector Graphics, and Splunk Dashboard Examples. In our testing environment URA is now content, and all apps pass the jQuery scans without issues. However, in our production environment the URA scan still fails all three apps. It does not specify which files, or whether there is a problem on one or all instances, so I don't know what is causing the results.

I have double- and triple-checked the apps, comparing hash values for every file both on the deployment server and on all individual test and production search heads. Everything except for the "install hash" in "meta.local" is identical in both the test and production environments. The apps are identical between cluster members in the test and production environments respectively. There are no additional files present on any search head in the production environment.

Why is URA still failing these apps only in the production environment? How can I identify the reason for the scan failures, as they should all pass in both environments, being identical and all? Any and all suggestions are most welcome. All the best
I upgraded my HF and Deployment servers from 9.0.4 to 9.2.2; these are Windows servers. Afterwards I received KVStore/MongoDB failure messages. I've tried the KVStore migration steps, and I've tried removing the .lock files from the mongo folder. I am trying to get MongoDB updated/upgraded to the latest version so the KVStore will stop complaining. Any help clearing this up would be greatly appreciated.
Hi All, can anyone please help me with this? I am framing an SPL query to get a list of hosts with their last event time.

SPL query:

| tstats max(_time) as latest where index=indexname by host
| convert ctime(latest)

From this query I get the list as expected, but with one bug: if a host appears both in lower case and in upper case, I get 2 different entries, e.g.:

host         latest
HOSTNAME1    09/17/2024 15:27:49
hostname1    08/30/2024 15:27:00
hostname2    09/15/2024 15:27:49
HOSTNAME2    09/13/2024 15:27:49

From here, I need only one entry per host along with the latest time (for hostname1 I should get 09/17/2024 15:27:49; similarly for hostname2, 09/15/2024 15:27:49). I tried adding:

| eval host=upper(host), latest=max(latest)
| dedup host

But it does not take the max of "latest"; it just shows a single row for each host with an arbitrary value of "latest". Can you please suggest a better way to achieve this? Thanks
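A sketch that normalizes case first and then re-aggregates, so the maximum is taken across both spellings before the timestamp is formatted (eval's max() compares scalars within one row, which is why the dedup approach picks an arbitrary value):

```
| tstats max(_time) as latest where index=indexname by host
| eval host=lower(host)
| stats max(latest) as latest by host
| convert ctime(latest)
```

Keeping `convert ctime` as the last step matters: once latest is a formatted string, max() would compare it lexicographically rather than chronologically.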
Hi everyone! Is it possible to pass a parameter from a search to the next "action|url" step, like the $result$ token in the description? If not, is it possible to change this behavior by modifying this next step, and if so, how? Thanks.
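For Splunk alert actions, per-result tokens generally take the form $result.fieldname$, drawing the value from the first result row. A hypothetical URL using assumed field names from the search results:

```
https://ticketing.example.com/create?host=$result.host$&count=$result.count$
```

Whether these tokens are substituted depends on the specific alert action; check that action's documentation for which token namespaces ($result.*$, $job.*$, $name$, etc.) it supports.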