All Topics

Hello all, I am new to Splunk and installed the free Enterprise version to start learning and expand my skill set. I can install Splunk locally and monitor files on the computer it is installed on. However, I now want to monitor a remote computer. I set up a test VM and was going to install the Universal Forwarder when it asked me for my Receiving Indexer. Obviously I cannot enter 127.0.0.1 as the IP, so I tried changing Splunk to use the IP of the machine where the Splunk server is running. Per the Splunk documentation, I changed the mgmtHostPort line in web.conf from 127.0.0.1:8089 to 10.xx.xx.xx:8089, and I also added SPLUNK_BINDIP=10.xx.xx.xx to the splunk-launch.conf file. After doing this, I tried to restart Splunk and it timed out, with an entry in the log: "Could not bind to ip 10.xx.xx.xx port 8089".

So I reverted all my changes to their default configuration, and now when I try to log into Splunk I get "500 Internal Server Error". Everything is as it was when it was first installed and I could log in, and I have also tried restarting the Splunk service on my PC three or four times. This is a Windows installation, by the way. Any ideas? This happened last week and the only thing that fixed it was uninstalling and reinstalling Splunk. Is that the only fix for when Splunk acts up? Thanks!

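For reference, a minimal sketch of the edits described above, with placeholder paths and the 10.xx.xx.xx address kept as written (the values and the 9997 port are illustrative assumptions, not a recommendation for this environment):

    # %SPLUNK_HOME%\etc\system\local\web.conf
    [settings]
    mgmtHostPort = 10.xx.xx.xx:8089

    # %SPLUNK_HOME%\etc\splunk-launch.conf
    SPLUNK_BINDIP = 10.xx.xx.xx

    # The "Receiving Indexer" the forwarder asks for is normally a splunktcp
    # receiving port (9997 by convention), not the 8089 management port:
    # %SPLUNK_HOME%\etc\system\local\inputs.conf
    [splunktcp://9997]
    disabled = 0
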
Hello, I would like to change a bare host name to a host name with a domain name. Following the articles I found, I changed the configuration using both the CLI and manual methods:

    ./splunk set servername nazwahosta.domena.koncowka
    ./splunk set default-hostname

and added the following to $SPLUNK_HOME/etc/system/local/deploymentclient.conf:

    [deployment-client]
    clientName = host.domain.name

Afterwards the instance name and client name changed to the version with the domain name, but the host name did not. I am using a deployment server and found that someone had a similar problem (but with no solution): Event from add-on Splunk app Windows source withou... - Splunk Community. I suspect the problem lies somewhere in the Splunk_TA_windows app props or conf files, but I can't find where exactly. Does anybody know where the problem is? Have a nice day!

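A hedged sketch of the setting that usually controls the host value stamped on events (as opposed to the serverName used for the instance), assuming default file locations; the value shown simply reuses the placeholder hostname from the post:

    # $SPLUNK_HOME/etc/system/local/inputs.conf
    [default]
    host = nazwahosta.domena.koncowka

If an app such as Splunk_TA_windows sets host in its own inputs.conf, or an individual input uses host_regex/host_segment, that more specific setting may override the default for those inputs.
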
I have a dashboard with a table like the one below:

    KPI      GLOBAL       LOCAL
    random1  random_data  random_data
    random2  random_data  random_data
    random3  random_data  random_data

I want to design a conditional drilldown where:
- when someone clicks on any value (random_data) in the GLOBAL column, they are redirected to a different page in a new tab
- when someone clicks on any value (random_data) in the LOCAL column, they are redirected to a different page in a new tab

Is there any way to achieve this?

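A minimal sketch of a per-column drilldown of the kind described, assuming a classic Simple XML table (the URLs are placeholders; Dashboard Studio would configure this differently):

    <drilldown>
      <condition field="GLOBAL">
        <link target="_blank">https://example.com/global-page?kpi=$row.KPI$</link>
      </condition>
      <condition field="LOCAL">
        <link target="_blank">https://example.com/local-page?kpi=$row.KPI$</link>
      </condition>
    </drilldown>

The condition blocks match on the clicked column, and target="_blank" opens the link in a new tab; $row.KPI$ passes the clicked row's KPI value along.
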
Hi Team, is there IntelliSense editor support in the Phantom playbook editor in the browser? Or can we integrate an existing Phantom instance into VS Code to develop playbook Python code with intelligent suggestions, and then test/execute it in the Phantom debugger or the VS Code debugger?

While most warnings and errors show up in the Job dropdown (1), some are also displayed in an area right below the search bar (2). Looking at the HTML, this placeholder is named search-searchflashmessages. What is the name of this area (so I can discuss it with Support)? Is it possible to configure which messages are shown or hidden here? The area is not detailed in the documentation: https://docs.splunk.com/Documentation/Splunk/8.2.4/Search/WhatsinSplunkSearch

Hi, I need to add an HTML link to my dashboard that points to an HTTP URL. The URL contains the character &, so when I try to add it, Splunk says "invalid character entity". How can I avoid this, please?

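A minimal sketch of the usual fix (the URL here is a placeholder): dashboard XML is parsed as XML, so a literal & inside it has to be written as the entity &amp;amp;, for example in an html panel:

    <html>
      <a href="https://example.com/page?foo=1&amp;bar=2">My link</a>
    </html>
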
Hi all, I have a sourcetype in which some events contain a keyword like asdf. In some events it appears in the middle, and in some events at the end. I need to route all of these events to another sourcetype with props and transforms.

    transforms.conf
    [generic_sourcetype_routing_asdf]
    REGEX =
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::asdf_logs

    props.conf
    [current_sourcetype]
    TRANSFORMS-sourcetype_routing = generic_sourcetype_routing_asdf

For the REGEX part, I would like to know whether keeping only asdf, or *asdf *, would work. I can't write a regex for the complete log format since there are multiple formats, so I need to tell Splunk that any event with asdf anywhere in it should be routed to the new sourcetype. Please suggest. Thanks, Maria Arokiaraj

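A hedged sketch of what the transform might look like, on the assumption that the goal is simply to match the literal keyword anywhere in the raw event. A transforms.conf REGEX is an unanchored PCRE applied to _raw by default, so a bare literal matches wherever the keyword occurs; *asdf * would not be a valid pattern in the intended sense (a leading * has nothing to repeat):

    # transforms.conf
    [generic_sourcetype_routing_asdf]
    REGEX = asdf
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::asdf_logs
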
I have a weird issue where the exact same search, run for the exact same period, returns a different number of events each time it is run, which renders any attempt at accurate reporting useless. The type of search doesn't matter: whether it includes statistics or is just a plain search, the same searches return different results.

We've checked all the usual suspects: event sampling is turned off, and indexing time is fine and not lagging, so the skew cannot come from there. The searches run directly against indexes, no data models are involved, and the search logs for the compared runs are identical.

What we discovered for sure is that this issue affects only indexes stored in S3 storage. Locally kept indexes are fine and do not have this issue. The S3 storage was tested: it is configured correctly, there are no network disruptions, there are no errors in the logs concerning it, and nothing hints at a problem. Yet the problem remains. Any idea what may be causing this?

Attaching a screenshot for visualization; here is the search it was made for:

    index="qualys" sourcetype="qualys:hostDetection" PATCHABLE="YES" NETBIOS="*"

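A hedged diagnostic sketch that can help localize the variation, assuming a SmartStore-style setup where the index lives on S3 (the approach, comparing per-indexer counts between runs over a fixed time range, is just one way to narrow it down; the index and sourcetype follow the search above):

    | tstats count where index=qualys sourcetype="qualys:hostDetection" by splunk_server
    | addcoltotals labelfield=splunk_server label=TOTAL

Running this several times over the same fixed range and comparing the per-indexer rows can show whether the missing events always disappear from the same peer, for example when a remote bucket cannot be fetched into the local cache in time.
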
Hello there, I want to build a top 10 of applications based on the top 10 of categories. Here is an example:

    Category  Nb of alerts / category  Application  Nb of alerts (by app for this category)
    Cat1      8000                     App1         1000
              8000                     App2         100
              8000                     App3         10
    Cat2      5000                     App1         10000
              5000                     App2         688
    Cat3      300                      App4         4560

So I know how to get the top 10 categories, but from there I don't know how to get the top 10 applications for each of those categories. Here is what I've done so far (note that the second column in my example doesn't exist in my query; it's just there to make the example clearer):

    index=my_index action=block
        [search index=my_index action=block | top category | table category]
    | stats count by category, app
    | stats values(app) AS apps, values(count) AS total by category

It gives me the 10 categories, but they are sorted in alphabetical order instead of by number of block actions, and I get more than 10 applications in the second column, not sorted. Does anyone have a solution for that? It'd be lovely. Thanks in advance.

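A hedged sketch of one way to get both rankings in a single pass, assuming the count of blocked events is the ranking criterion for both categories and applications (field names follow the query above):

    index=my_index action=block
    | stats count AS app_count by category, app
    | eventstats sum(app_count) AS category_count by category
    | sort 0 -category_count -app_count
    | streamstats dc(category) AS category_rank
    | where category_rank <= 10
    | streamstats count AS app_rank by category
    | where app_rank <= 10
    | table category, category_count, app, app_count

The eventstats gives each row its category total, the first streamstats ranks categories after sorting by that total, and the second streamstats ranks applications within each category.
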
Hi, various tables from a database are read by Splunk, and I need to combine fields from all three datasources. The ID fields contain the same value, but the ID rolls over after a fixed number of entries, roughly every three months. The _time values are close together (within seconds or minutes), but they are not identical.

    datasource dsa: _time, ID-A, field-a1, field-a2
    datasource dsb: _time, ID-B, field-b1, field-b2
    datasource dsc: _time, ID-C, field-c1, field-c2

Any suggestions on how to achieve this? Regards, Manfred

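A hedged sketch of one common pattern for this, assuming the three datasources can be searched together and that an ID only repeats months apart, so bucketing on a coarse time span is enough to keep rolled-over IDs separate (the source names and the 1d span are placeholders):

    source=dsa OR source=dsb OR source=dsc
    | eval id=coalesce('ID-A', 'ID-B', 'ID-C')
    | bin span=1d _time AS day
    | stats min(_time) AS _time,
            values(field-a*) AS field-a*,
            values(field-b*) AS field-b*,
            values(field-c*) AS field-c*
      by id, day
    | fields - day

If the rollover window is tighter, a smaller span (or matching on both id and a coarse time bucket derived from the earliest of the three events) would be needed.
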
Hello, we are trying to ingest the sourcetype azure:aad:user (the AAD Users input) and are facing a timestamping issue. Other inputs, for example groups or sign-ins, work fine.

Around 33k of our 105k users are indexed with the latest timestamp of the last input interval (86400 seconds, once a day), but 22k users are indexed with the fixed timestamp 11/28/17 9:06:37.900 AM (CET), and 50k users are indexed with the fixed timestamp 3/9/18 8:01:24.400 PM (CET). We do not see any reference to these two timestamps within Azure Active Directory, so we think it is a Splunk-related issue. We use Splunk 8.1.6 and the Microsoft Azure Add-on 3.2.0. Do you have any idea how to explain or change this behaviour?

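A hedged sketch of a possible workaround, on the assumption that the odd timestamps are being extracted from date-like fields inside the user objects rather than from a real event time (AAD user records do not really have one). DATETIME_CONFIG = CURRENT makes Splunk stamp such events with the index time instead; it would need to be applied where parsing happens (heavy forwarder or indexer):

    # props.conf
    [azure:aad:user]
    DATETIME_CONFIG = CURRENT
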
Hi all, I have been using a subsearch in a timechart command to dynamically select the correct span. The query looks like this:

    | timechart [| makeresults | eval interval = "*" | `get_timespan(interval)` | eval span = "span=".timespan_from_macro | return $span] count by MYFIELD

The idea behind this is as follows. We have a dashboard with a selector to choose between a week, month, quarter, and year of data to show, and the span of the timechart should adjust accordingly. interval is the token inserted from the dashboard, and get_timespan is a search macro that yields 1w@w1, 1mon@mon, quarter, or 1y@y into timespan_from_macro, which in turn specifies the span to use in the timechart command.

This had been working fine for the last couple of weeks, and the approach has been suggested in this forum a few times. However, due to the log4j vulnerability our admins were forced to update to 8.2.4, and now the query yields no results even though there should be some. Before, we were on version 8.2.2 (not 100% certain, but pretty confident). Has something changed that I need to adjust the query for, or are there better solutions? Or could it really be related to the update?

PS: The search does not throw an error, it just yields no results. If I open the Inspect Job window and copy and paste the generated query, it yields the correct results (since the subsearch has already been executed and replaced with the correct span=... value).

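A hedged alternative that takes the subsearch out of the timechart entirely: resolve the span into a dashboard token with a small helper search, so timechart receives a literal value. This is only a sketch assuming a Simple XML dashboard; the token names ($interval_tok$, $span_tok$) are placeholders:

    <search>
      <query>| makeresults | eval interval="$interval_tok$" | `get_timespan(interval)` | eval span="span=".timespan_from_macro | table span</query>
      <done>
        <set token="span_tok">$result.span$</set>
      </done>
    </search>

    ... base search ... | timechart $span_tok$ count by MYFIELD
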
Hi, what is the best practice for ingesting data from external (internet-based) data sources when only syslog or native forwarding is available? A set of load-balanced heavy forwarders in the DMZ that act as a relay to the internal indexers? Direct channels from external networks to internal networks are not an option, due to security requirements.

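For reference, a hedged sketch of the relay pattern mentioned above, assuming the DMZ heavy forwarders receive syslog and forwarded data on the outside and forward it inward (host names and ports are placeholders):

    # outputs.conf on a DMZ heavy forwarder
    [tcpout]
    defaultGroup = internal_indexers

    [tcpout:internal_indexers]
    server = idx1.internal.example:9997, idx2.internal.example:9997
    useACK = true

    # inputs.conf on the same heavy forwarder
    [splunktcp://9997]
    disabled = 0

    [udp://514]
    sourcetype = syslog
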
Hi there, I followed this article https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-install-the-NET-Core-Microservices-Agent-for-Windows/ta-p/33191 and instrumented one of my ASP.NET Core applications. I can see that profiler logs and agent logs are generated when I access the app, but AgentLog.txt contains the error message below and I cannot see any metrics in the dashboard.

I tried using the global-account-name with the access key as the password, and then the account name and access key combination, but neither of them worked. A web application on regular ASP.NET worked fine, and its log does not show this error with the same controller and the same access credentials; I only see the issue with ASP.NET Core.

    2022-01-18 13:56:12.4240 22660 w3wp 1 6 Warn ConfigurationChannel Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [theater202201112310492.saas.appdynamics.com], port[443], exception [AppDynamics.Controller_api.Communication.Http.HttpCommunicatorException: Failed to execute request to endpoint [https://theater202201112310492.saas.appdynamics.com/controller/instance/0/applicationConfiguration_PB_]. Unexpected response status code [Unauthorized].
       at AppDynamics.Controller_api.Communication.Http.HttpClientHttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Dictionary`2 additionalHeaders, Dictionary`2 additionalSecuredHeaders, String userAgent, Func`3 processResponse, Int32 timeout)
       at AppDynamics.Controller_api.Communication.Http.HttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Action`1 processResponse, Int32 timeout)
       at AppDynamics.Controller_api.Communication.Http.HttpCommunicatorExtensions.Send(HttpCommunicator communicator, ProtobufPayload payload, Uri uri)
       at com.appdynamics.ee.rest.controller.request.AProtoBufControllerRequest.sendRequest()]

    2022-01-18 13:56:12.4240 22660 w3wp 1 6 Error ConfigurationChannel Exception: Failed to execute request to endpoint [https://theater202201112310492.saas.appdynamics.com/controller/instance/0/applicationConfiguration_PB_]. Unexpected response status code [Unauthorized].
    Exception: AppDynamics.Controller_api.Communication.Http.HttpCommunicatorException: Failed to execute request to endpoint [https://theater202201112310492.saas.appdynamics.com/controller/instance/0/applicationConfiguration_PB_]. Unexpected response status code [Unauthorized].
       at AppDynamics.Controller_api.Communication.Http.HttpClientHttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Dictionary`2 additionalHeaders, Dictionary`2 additionalSecuredHeaders, String userAgent, Func`3 processResponse, Int32 timeout)
       at AppDynamics.Controller_api.Communication.Http.HttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Action`1 processResponse, Int32 timeout)
       at AppDynamics.Controller_api.Communication.Http.HttpCommunicatorExtensions.Send(HttpCommunicator communicator, ProtobufPayload payload, Uri uri)
       at com.appdynamics.ee.rest.controller.request.AProtoBufControllerRequest.sendRequest()

I have n routers in my index and I want to know the current status of each router, whether it is connected or failed. If a router has failed, I need to show the router number of the failed router. How do I display the router number instead of a router count, and what query should I write?

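A hedged sketch of the usual pattern, assuming each event carries fields along the lines of router_number and status (both names, and the index, are placeholders for whatever the data actually contains):

    index=my_router_index
    | stats latest(status) AS current_status by router_number
    | where current_status="failed"
    | table router_number, current_status
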
Hello, can someone please help me with a query to find who deleted the files of users (user=x, y, z) from a folder?

    index=* sourcetype=* folder_name=*abc*

Thank you

Hi everyone,

Here is my setup and my current issue (screenshot omitted):
- I'm running this search in a clustered environment
- I'm running Splunk 8.0.5
- I'm using the search command in the CLI
- I'm running the search command to display the results of a saved search

Below is the CLI search command I used:

    /opt/splunk/bin/splunk search "|savedsearch TestReport" -maxout 0 -auth test:test

Additionally, below is the content of that saved search:

    index=myindex (sourcetype=sourcetype1 OR sourcetype=sourcetype2) _index_earliest="01/17/2020:22:00:00" _index_latest="01/17/2020:22:59:00" | stats count

I need the CLI search result to be displayed only once, since I'm using its content to populate another CSV for another purpose. Kindly let me know if there is something I need to reconfigure in my environment. Thank you!

Regards, Raj

We developed a dashboard with a custom layout using the Enterprise Dashboard beta app. There we used a simple feature to load a dropdown menu with entries from a lookup file, and it takes too long to load: when we click on the dropdown menu, it takes about 10 to 15 seconds for the options to appear. Is anyone aware of this issue? Any known solutions?

Note: the same dropdown, loaded in a regular Splunk app, takes less than 1 second.

Please help! I have a lookup table and some data in two different indexes. Can you help with a search that produces output like the following? I need to show "Foo Bar", which is present in the lookup but has no values associated with the name in either index.

    name           id      action
    Tom Brady      tom     deleted
    Foo Bar        N/A     N/A
    Aaron Rodgers  aaron   added

Lookup player.csv (the column heading is name):

    Tom Brady
    Foo Bar
    Aaron Rodgers

index=a events:

    name="Tom Brady" id=tom
    name="Aaron Rodgers" id=aaron

index=b events:

    user=tom action=deleted
    user=aaron action=added

This is where I'm stuck. How can I also show "Foo Bar" as N/A?

    index=b
    | join type=inner user
        [ | search index=a
            [| inputlookup player.csv | fields name ]
          | rename id AS user ]
    | table name, user, action

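A hedged sketch of one way to keep the lookup rows that have no matches: start from the lookup and left-join both indexes, then fill the gaps (field names follow the examples above; this is a sketch, not the only approach, and the usual join subsearch limits apply):

    | inputlookup player.csv
    | join type=left name
        [ search index=a | fields name, id ]
    | join type=left id
        [ search index=b | rename user AS id | fields id, action ]
    | fillnull value="N/A" id action
    | table name, id, action

Because the joins are type=left, "Foo Bar" survives with empty id/action, and fillnull turns those empty values into N/A.
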
I'm using an on-premises Controller and I'm hoping to upgrade, because the latest version has been released. The trouble is that I don't know the impact of the update. Specifically, the "On-Premises Platform Resolved Issues" described at the following URL (*) contain only the following information, so I cannot judge the impact of the update:
- Key
- Product
- Summary
- Version

(*) https://docs.appdynamics.com/21.4/en/product-and-release-announcements/release-notes#ReleaseNotes-on-prem-resolved-issues

Could you tell me about the following?
- If you know the importance and impact of the resolved issues, could you share that information?
- As a way to understand the importance and impact of resolved issues, could the following be included in the Resolved Issues section and made public?
  - the importance of each resolved issue
  - the effects of each resolved issue
