All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, various tables from a database are read by Splunk. I need to combine fields from all 3 datasources. The ID fields contain the same value, but they roll over after a fixed number of entries; this happens approximately every 3 months. The _time values are close together (within seconds or minutes), but they are not identical.
datasource dsa: _time, ID-A, field-a1, field-a2
datasource dsb: _time, ID-B, field-b1, field-b2
datasource dsc: _time, ID-C, field-c1, field-c2
Any suggestions on how to achieve this? Regards, Manfred
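One possible starting point (an untested sketch; it assumes the three datasources are distinguishable by sourcetype, and that a daily bucket is enough to keep rolled-over IDs apart, since the rollover happens roughly every 3 months): normalize the three ID fields into one, then merge with stats.
    (sourcetype=dsa OR sourcetype=dsb OR sourcetype=dsc)
    | eval ID=coalesce('ID-A', 'ID-B', 'ID-C')
    | bin span=1d _time as day
    | stats earliest(_time) as _time values(*) as * by ID day
    | fields - day
Grouping by ID plus the day bucket prevents two different entities that happened to reuse the same ID after a rollover from being merged into one row.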
Hello, we are trying to ingest the sourcetype azure:aad:user (AAD users input) and are facing an issue with timestamping. Other inputs, for example groups or sign-ins, are working fine. Of our 105k users:
- around 33k are indexed with the latest timestamp of the last input interval (86400 seconds, once a day),
- 22k are indexed with the fixed timestamp 11/28/17 9:06:37.900 AM (CET),
- 50k are indexed with the fixed timestamp 3/9/18 8:01:24.400 PM (CET).
We do not see any reference to these two timestamps within the Azure Active Directory, therefore we think it is a Splunk-related issue. We use Splunk 8.1.6 and the Microsoft Azure Add-on 3.2.0. Do you have any idea how to explain or change this behaviour?
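If timestamp extraction is latching onto a date-like attribute inside some user objects, one workaround to test (a sketch, not a confirmed fix for this add-on) is to force index time for this sourcetype in props.conf on the first full Splunk instance the data passes through:
    [azure:aad:user]
    DATETIME_CONFIG = CURRENT
This makes _time the time of indexing rather than anything parsed from the event, which is often acceptable for inventory-style data pulled on a fixed daily interval.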
Hi all, I have been using a subsearch in a timechart command to dynamically select the correct span. The query looks like this:
| timechart [| makeresults | eval interval = "*" | `get_timespan(interval)` | eval span = "span=".timespan_from_macro | return $span] count by MYFIELD
The idea behind this is as follows. We have a dashboard with a selector to choose between a week, month, quarter, and year of data to show. Depending on this, the span of the timechart should be adjusted. Therefore, interval is the token inserted from the dashboard, and get_timespan is a search macro that yields 1w@w1, 1mon@mon, quarter, or 1y@y into timespan_from_macro. In turn, this specifies the span to use in the timechart command. This had been working fine for the last couple of weeks, and this approach has been suggested in this forum a few times. However, due to the log4j vulnerability our admins were forced to update to 8.2.4, and now the query yields no results even though there should be some. Before, we were on version 8.2.2 (not 100% certain, but pretty confident). Has something changed that requires me to adjust the query, are there better solutions for this, or could it really be related to the update?
PS: The search does not throw an error, but yields no results. If I open the inspect job window and just copy and paste the generated query, it yields the correct results (since the subsearch has been executed and replaced with the correct span=... value).
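A minimal reproduction that may help isolate whether the subsearch-to-span mechanism itself broke on 8.2.4 (a sketch with a hard-coded span in place of the token and macro; swap in any index you have data for):
    index=_internal
    | timechart [| makeresults | eval span="span=1h" | return $span] count
If this also yields no results on 8.2.4 but works when the expanded query is pasted back in, the regression is in subsearch expansion for timechart rather than in your macro or token.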
Hi, what is the best practice for ingesting data from external (internet-based) data sources when only syslog or native forwarding are available? A set of load-balanced heavy forwarders in the DMZ that act as a relay to the internal indexers? Direct channels from external networks to internal networks are not an option, due to security requirements.
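For the relay pattern, a minimal outputs.conf sketch for each DMZ heavy forwarder (hostnames are placeholders; add TLS settings per your security policy):
    [tcpout]
    defaultGroup = internal_indexers

    [tcpout:internal_indexers]
    server = idx1.internal.example:9997, idx2.internal.example:9997
    useACK = true
Listing several indexers gives automatic load balancing, and useACK protects against data loss if the DMZ-to-internal link drops.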
Hi there, I followed this article https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-install-the-NET-Core-Microservices-Agent-for-Windows/ta-p/33191 and instrumented one of my ASP.NET Core applications. I see profiler logs and agent logs being generated when I access the app, but in AgentLog.txt I see the error message below, and I cannot see any metrics in the dashboard. I tried using the global-account-name with the access key as the password, then I tried the account name and access key combination, but neither worked. One web application on regular ASP.NET worked fine, and its log does not show this error using the same controller and the same access credentials; I only see the issue with ASP.NET Core.

2022-01-18 13:56:12.4240 22660 w3wp 1 6 Warn ConfigurationChannel Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [theater202201112310492.saas.appdynamics.com], port[443], exception [AppDynamics.Controller_api.Communication.Http.HttpCommunicatorException: Failed to execute request to endpoint [https://theater202201112310492.saas.appdynamics.com/controller/instance/0/applicationConfiguration_PB_]. Unexpected response status code [Unauthorized]. at AppDynamics.Controller_api.Communication.Http.HttpClientHttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Dictionary`2 additionalHeaders, Dictionary`2 additionalSecuredHeaders, String userAgent, Func`3 processResponse, Int32 timeout) at AppDynamics.Controller_api.Communication.Http.HttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Action`1 processResponse, Int32 timeout) at AppDynamics.Controller_api.Communication.Http.HttpCommunicatorExtensions.Send(HttpCommunicator communicator, ProtobufPayload payload, Uri uri) at com.appdynamics.ee.rest.controller.request.AProtoBufControllerRequest.sendRequest()]

2022-01-18 13:56:12.4240 22660 w3wp 1 6 Error ConfigurationChannel Exception: Failed to execute request to endpoint [https://theater202201112310492.saas.appdynamics.com/controller/instance/0/applicationConfiguration_PB_]. Unexpected response status code [Unauthorized]. Exception: AppDynamics.Controller_api.Communication.Http.HttpCommunicatorException: Failed to execute request to endpoint [https://theater202201112310492.saas.appdynamics.com/controller/instance/0/applicationConfiguration_PB_]. Unexpected response status code [Unauthorized]. at AppDynamics.Controller_api.Communication.Http.HttpClientHttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Dictionary`2 additionalHeaders, Dictionary`2 additionalSecuredHeaders, String userAgent, Func`3 processResponse, Int32 timeout) at AppDynamics.Controller_api.Communication.Http.HttpCommunicator.Send(Byte[] data, Uri relativeUri, String method, String contentType, Action`1 processResponse, Int32 timeout) at AppDynamics.Controller_api.Communication.Http.HttpCommunicatorExtensions.Send(HttpCommunicator communicator, ProtobufPayload payload, Uri uri) at com.appdynamics.ee.rest.controller.request.AProtoBufControllerRequest.sendRequest()
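For a SaaS controller, the Unauthorized response usually means the account name/access key pair the agent sends does not match the Controller's license page. One way to supply them to the .NET Core agent is via its standard environment variables (a sketch; double-check the variable names against the .NET Core agent documentation for your version, and the values below are placeholders):
    APPDYNAMICS_AGENT_ACCOUNT_NAME=<account name from the Controller license page>
    APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<access key from the Controller license page>
    APPDYNAMICS_CONTROLLER_HOST_NAME=theater202201112310492.saas.appdynamics.com
    APPDYNAMICS_CONTROLLER_PORT=443
    APPDYNAMICS_CONTROLLER_SSL_ENABLED=true
For an IIS-hosted app these would typically be set machine-wide (or in the site's environment) and IIS restarted afterwards.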
If I have n routers in my index and I want to know the current status of each router, connected or failed, and if it has failed I need to show the router number of the failed router. How do I display the router number instead of the router count, and what query should I write?
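A common pattern for "current status per device" (a sketch; the field names router_number and status are assumptions about your data):
    index=your_index
    | stats latest(status) as status by router_number
    | where status="failed"
    | table router_number status
The stats latest(...) by ... keeps one row per router with its most recent status, and the where clause leaves only the failed ones.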
Hello, can someone please help me with a query to find who deleted the files of certain users (user=x, y, z) from a folder?
index=* sourcetype=* folder_name=*abc*
Thank you
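If the folder is on Windows with object-access auditing enabled, a sketch along these lines may work (index, sourcetype, and field names are assumptions; Security event 4663 with a DELETE access records who performed the deletion):
    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4663 Accesses="DELETE" Object_Name="*abc*"
    | table _time user Object_Name
The user field here is the account that deleted the file; filter Object_Name further if you only care about files belonging to users x, y, and z.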
Hi everyone,
Here is my setup and my current issue (screenshot omitted):
- I'm running this search in a clustered environment
- I'm running Splunk 8.0.5
- Using the search command in the CLI
- Running the search command to display the results of a savedsearch
Below is the CLI search command I used:
/opt/splunk/bin/splunk search "|savedsearch TestReport" -maxout 0 -auth test:test
Additionally, below is the content of the savedsearch:
index=myindex (sourcetype=sourcetype1 OR sourcetype=sourcetype2) _index_earliest="01/17/2020:22:00:00" _index_latest="01/17/2020:22:59:00" | stats count
I need the CLI search result to display only once, since I'm using its content to populate another csv which I will use for another purpose.
Kindly let me know if there is something that I need to reconfigure in my environment. Thank you!
Regards, Raj
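If the goal is machine-readable output for building a csv, the CLI's output flag may be the simplest route (a sketch; run splunk help search to confirm the flags available on 8.0.5):
    /opt/splunk/bin/splunk search "|savedsearch TestReport" -output csv -maxout 0 -auth test:test > /tmp/testreport.csv
With -output csv the result is emitted once as plain csv, which can be redirected straight into the file you are populating.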
We developed a dashboard with a custom layout using the enterprise dashboard beta app. There we used a simple feature to load a drop-down menu with entries from a lookup file. This takes too long to load: when we click on the drop-down menu, it takes about 10 to 15 seconds to load the options. Is anyone aware of this issue? Any known solutions?
Note: the same drop-down, if done in a regular Splunk app, takes < 1 second to load.
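One thing worth trying while the root cause is unclear (a sketch; my_lookup.csv and value_field are placeholders for your lookup and column) is to make the dropdown's datasource return only the distinct values it needs, so less data is transferred and rendered:
    | inputlookup my_lookup.csv
    | dedup value_field
    | fields value_field
If the lookup is wide or has many duplicate rows, trimming it this way sometimes makes an order-of-magnitude difference in input load time.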
Please help! I have a lookup table and some data in two different indexes. Please help with a search that will produce an output like the following. I need to show "Foo Bar", which is present in the lookup, but has no values associated with the name in either index.
name           id      action
Tom Brady      tom     deleted
Foo Bar        N/A     N/A
Aaron Rodgers  aaron   added
The lookup player.csv has a single column headed name: Tom Brady, Foo Bar, Aaron Rodgers.
index=a events: name="Tom Brady" id=tom; name="Aaron Rodgers" id=aaron
index=b events: user=tom action=deleted; user=aaron action=added
This is where I'm stuck. How can I also show "Foo Bar" as N/A?
index=b | join type=inner user [ | search index=a [| inputlookup player.csv | fields name ] | rename id AS user ] | table name, user, action
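A join-free sketch that keeps lookup-only names (field names follow your example; an inner join drops names with no matching events, which is why Foo Bar disappears):
    index=a OR index=b
    | eval id=coalesce(id, user)
    | stats values(name) as name values(action) as action by id
    | append [| inputlookup player.csv | fields name]
    | stats values(id) as id values(action) as action by name
    | fillnull value="N/A" id action
    | table name id action
The first stats stitches the two indexes together on id, the append brings in every name from the lookup, the second stats merges them by name, and fillnull supplies N/A for names that never matched any event.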
I'm using an on-premises Controller. I would like to update because the latest version has been released, but I'm having trouble because I don't know the impact of the update. Specifically, the "On-Premises Platform Resolved Issues" described at the following URL (*) contain only the following information, so the impact of the update cannot be determined:
- Key
- Product
- Summary
- Version
(*) https://docs.appdynamics.com/21.4/en/product-and-release-announcements/release-notes#ReleaseNotes-on-prem-resolved-issues
Could you tell us about the following?
- If you know the importance and impact of the Resolved Issues, could you share that information?
- As a way for us to learn the importance and impact of Resolved Issues, could the severity and effects of each resolved issue be included in the Resolved Issues and made public?
After the first-time installation of the cyberchef app on a search head via the deployment server, the custom search command works properly. However, on subsequent use we are seeing the following error, and the search does not work: "Error loading required module Cyberchef: SyntaxError: Unexpected token import". App version 1.0.3. Can anyone please help fix this issue?
I have a need to export dashboards to clients in the Splunk Cloud environment. Without access to the backend it is difficult to do many things, and Dashboard Studio does not allow the export of scheduled PDFs (nor does it export data very well in comparison, as that's not its intent). Someone previously asked the below but did not get a usable response; I wonder if anyone else has insights for achieving better exports in a Splunk Cloud instance?
"I want to extract our dashboards the same way as they appear. If I do a PDF export it comes out with 1 or 2 panels on each page rather than the same as the dashboard looks, where it might have multiple panels grouped together. The app 'Smart PDF Exporter for Splunk' https://splunkbase.splunk.com/app/4030/#/details is ideal but cannot be used for cloud. Does anyone have any other ideas on how to export it in the same way as this app?"
Hello, I need assistance with the Splunk universal forwarder: it cannot create the parent directory /opt/Splunkforward/etc/apps/scBaseline_LinuxVarLog. I installed the forwarder as root, but the server couldn't deploy apps such as scBaseline_LinuxVarLog, so I decided to run it under its own splunk user; now it doesn't have permission to create the directory /opt/Splunkforward/etc/apps/scBaseline_LinuxVarLog. I changed the ownership with chown -R splunk:splunk /opt/Splunkforward/etc/apps/scBaseline_LinuxVarLog; that works momentarily, but the ownership changes back to root:root again. Universal forwarder 8.1 on a Linux machine. Your assistance is appreciated.
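Ownership reverting to root:root usually means the forwarder process is still being started as root (for example by an old boot-start script). A sketch of the usual sequence (run as root; the paths follow your install directory):
    /opt/Splunkforward/bin/splunk stop
    chown -R splunk:splunk /opt/Splunkforward
    /opt/Splunkforward/bin/splunk enable boot-start -user splunk
    /opt/Splunkforward/bin/splunk start
Re-running enable boot-start with -user splunk rewrites the init configuration so future restarts (including reboots) launch splunkd as the splunk user instead of root.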
I'm still struggling to get some basic XML tokenization concepts working in Dashboard Studio. I have a simple text box input someone types into; from there, I want that to set a token that I use in the base search. The next thing I want to do is take a field/value pair in the results and set the value in a token to use in another panel that is going to fill in a URL and grab an image. So:
Base datasource ds_lrth34: textbox -> chain search off ds_lrth34 that runs .. | fields MyField
I want to do something that I would otherwise have done this way in Simple XML:
<set token="myField">$result.MyField$</set>
but I haven't found a structure to make this work at all, so that when I create an image viz I could do something like this:
"visualizations": {
    "viz_TRhGkelt": {
        "type": "viz.img",
        "options": {
            "src": "https://myCompany.sharepoint.com/_layouts/15/userphoto.aspx?AccountName=$MyField$&Size=L"
        }
    }
},
This methodology is heavily used in our Simple XML dashboards; I would love to know the right way to tokenize things like this in the new workflow.
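One avenue worth testing (a sketch; newer Dashboard Studio releases document referencing a datasource's first search result directly in a token, but I am not certain the beta app supports it, so verify the syntax against your version's docs) is to skip the <set> step entirely and point the image option at the chain search's result:
"visualizations": {
    "viz_TRhGkelt": {
        "type": "viz.img",
        "options": {
            "src": "https://myCompany.sharepoint.com/_layouts/15/userphoto.aspx?AccountName=$ds_chain:result.MyField$&Size=L"
        }
    }
}
Here ds_chain is assumed to be the ID of the chain search that ends in | fields MyField; the token resolves to that field's value in the first result row.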
I am using RHEL 8.5 with an Apache proxy and am trying to get the proxy to do DoD CAC authentication. I am very new to Splunk and Apache. I am able to get the proxy to point to the Splunk server, but I am not getting the usual prompt for CAC authentication. I am using reserve.conf as the main setup file, as seen below:

====web.conf
SSOMode = strict
remoteUser = Remote_User
enableSplunkWebSSL = True
trustedIP = 192.168.110.10

===server.conf
trustedIP = 192.168.110.10

===Reserve.conf
ServerName www.mcscapache.com
ProxyRequests Off
ProxyPreserveHost Off
SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
#SSLVerifyClient require
SSLVerifyDepth 10
# initialize the special headers to a blank value to avoid http header forgeries
RequestHeader set SSL_CLIENT_S_DN ""
RequestHeader set SSL_CLIENT_I_DN ""
RequestHeader set SSL_SERVER_S_DN_OU ""
RequestHeader set SSL_CLIENT_VERIFY ""
# add all the SSL_* you need in the internal web application
RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}e"
RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}e"
RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}e"
RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}e"
RequestHeader add X-Forwarded-Proto https
RequestHeader add X-Forwarded-Port 443
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set Client-Cert-Subject "%{SSL_CLIENT_S_DN}s"
RequestHeader set Remote_User %{Remote_User}s
ProxyPass / https://www.mcscsplunk.com:8000/en-US/app/launcher/home
ProxyPassReverse / https://www.mcscsplunk.com:8000/en-US/app/launcher/home
#SSLCertificateChainFile /etc/pki/tls/certs/ca-bundle.crt
SSLCertificateFile /etc/pki/tls/private/dodserver.crt
#SSLCACertificateFile /etc/pki/tls/private/DoD_CAs.pem
SSLCertificateKeyFile /etc/pki/tls/private/dodserverkey.key
<Proxy *>
RewriteEngine On
RewriteCond %{SSL:SSL_CLIENT_S_DN_CN} ([0-9]+$)
RewriteRule (.*) - [E=USER:%1]
#RequestHeader set cacuser %{USER}e@mil
RequestHeader set Remote_User %{Remote_User}e
# SSL conf file to force users to a warning cookie before they are able to access Splunk
RewriteCond %{HTTP_COOKIE} !accepted_warning=true [NC]
RewriteRule ^/(de-DE|en-US|en-GB|it-IT|ja-JP|ko-KO|zh-CN|zh-TW)/.*$ warning/ [NC,L,R=302]
</Proxy>
<Location />
Require all granted
allow from all
AuthType Kerberos
require valid-user
Options +SymLinksIfOwnerMatch
# AllowOverride All
Order deny,allow
Allow from 192.168.190.0/24
Deny from all
</Location>

Thank you
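The browser only prompts for a certificate when Apache requests one, and SSLVerifyClient is commented out in the config above. A minimal sketch of the directives that usually drive the CAC prompt (using the DoD CA bundle path already present in your file; test carefully, since requiring a client certificate affects every request through the vhost):
    SSLVerifyClient require
    SSLVerifyDepth 10
    SSLCACertificateFile /etc/pki/tls/private/DoD_CAs.pem
Without SSLCACertificateFile pointing at the DoD CAs, Apache has no list of acceptable issuers to send in the certificate request, so browsers may not offer the CAC certificate even when asked.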
Can a search time limit be applied by index rather than by role? Currently, we have a search role limit of 6 weeks. However, this limit defeats the purpose of summary indexing, which often requires examining a much longer time range. Any thoughts or workarounds?
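The search window is set per role (srchTimeWin in authorize.conf), not per index, so a common workaround is a dedicated role that is allowed to search only the summary index but with a longer or unlimited window, assigned in addition to users' normal roles. A sketch (role and index names are placeholders; verify how srchTimeWin combines across multiple roles in your version before rolling this out):
    [role_summary_search]
    srchIndexesAllowed = my_summary_index
    srchTimeWin = 0
Here 0 means no window restriction for searches run under this role.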
My main query looks like:
...| stats min(_time) AS SESSION_START_TIME max(Source_Network_Address) AS EMP_SRC_IP... | eval empID=`my_macro($EMP_SRC_IP$, $SESSION_START_TIME$)`
My macro definition is:
index=my_idx event.eventID=4624 event.Come_From=$ip_address$ latest=$time$ | sort - _time | head 1 | table event.Who_Is_It
My questions are:
1. How can I make my macro, my_macro, return a string which is the value of event.Who_Is_It?
2. Is the way I assign the macro's returned value to empID the right way?
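Macros are expanded as text before the search runs, so a macro cannot return a runtime value into eval this way. The usual substitute for "run a lookup search per result row" is map, which re-executes a subsearch with each row's field values filled into $...$ tokens. A sketch (field and index names follow your example; note that map runs one search per row and discards the outer row's other fields unless you re-inject them via tokens):
    ... | stats min(_time) AS SESSION_START_TIME max(Source_Network_Address) AS EMP_SRC_IP
    | map maxsearches=100 search="search index=my_idx event.eventID=4624 event.Come_From=$EMP_SRC_IP$ latest=$SESSION_START_TIME$ | sort - _time | head 1 | rename event.Who_Is_It as empID | table empID"
If the result set is large, a cleaner pattern is usually to search both datasets together and correlate with stats instead of per-row subsearches.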
Hey all, I am stumped and need some help. I am configuring a system stack with Splunk Enterprise on it. It is relatively small, only 11 systems. I have the web interface installed with a license, forwarders and apps pushed out to the systems, and port listeners open on 9997 in the forwarding and receiving tab for the forwarders to talk back to. I know there is some communication, because I can see all of the systems in the forwarder management tab; however, I cannot get any data into our dashboards. The only system data I can find and search is that of the server where the main instance is located. I have indexes made for all the different types of data (linux_audit, win_security, etc.), but no data from the forwarders themselves is coming through. My only other thought is a firewall issue, i.e. that the correct port isn't open, but beyond that I have no idea. I am sorry for the ignorance; this is my first real time setting this up, and the Splunk documentation isn't super helpful for troubleshooting. Thanks in advance!
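A quick first check (a sketch; run it on the indexer over the last 15 minutes): forwarders ship their own internal logs over the same 9997 pipe, so if their hostnames show up here, the network path works and the problem is in inputs.conf or index names rather than the firewall:
    index=_internal earliest=-15m
    | stats count by host
If only the main server's hostname appears, the forwarders are not actually sending (check their outputs.conf and splunkd.log); if all 11 appear, compare the index names in the deployed inputs.conf stanzas against the indexes you created.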
Hello, as everyone already knows, Splunk commands like ./splunk status can normally only be run from the Splunk /opt folder. I saw a video where someone executes the splunk status command without being in the Splunk /opt folder; could you tell me how to do it? Thank you
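That is just the shell's PATH at work. A sketch, assuming a default /opt/splunk install (add the export line to ~/.bashrc or a profile script to make it permanent):
    export PATH=$PATH:/opt/splunk/bin
    splunk status
Once the bin directory is on PATH, the splunk command resolves from any working directory.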