All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi - I have a Splunk UF monitoring many directories on an rsyslog (receiver) server. One of the directories populated with logs as expected; however, the input stanza had the incorrect sourcetype, and the data/logs did not index. Now, after removing the incorrect sourcetype, I need to reset the UF so it re-monitors the log files in that single directory only. I do NOT want to re-index everything the UF monitors. Please advise the best way to handle this. Thank you.

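One commonly suggested approach is to reset the fishbucket checkpoint for each affected file with btprobe, which makes the UF treat those files as new while leaving everything else alone. A minimal sketch, assuming a default install path and that /var/log/receiver stands in for the affected directory (both are placeholders):

# stop the forwarder first so the fishbucket is not in use
$SPLUNK_HOME/bin/splunk stop

# reset the checkpoint for each file in the affected directory only
for f in /var/log/receiver/*.log; do
    $SPLUNK_HOME/bin/splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file "$f" --reset
done

$SPLUNK_HOME/bin/splunk start

Checkpoints for all other monitored files are untouched, so nothing else is re-indexed.
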
Hi, could you please give me advice on how to edit a call to the Splunk REST API with the following parameter:

search | inputlookup mylookup.csv

The goal is to use another app context, not the user's default one. I tried the following, but it gave me the same result:

| inputlookup mylookup.csv | eval app_name = $env:MyNewApp$

Thanks a lot!

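The app context of a REST search call is set by the endpoint's namespace, not by anything inside the SPL, so an eval cannot change it. A minimal sketch using curl, where MyNewApp and the credentials are placeholders:

curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/admin/MyNewApp/search/jobs \
    --data-urlencode search="| inputlookup mylookup.csv" \
    -d exec_mode=oneshot

With /servicesNS/<user>/<app>/..., the lookup is resolved against MyNewApp's knowledge objects (subject to permissions), which is usually what makes an app-scoped lookup visible.
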
Hi - let's say you have a scheduled query/report that runs daily (at midnight) over a time range of Last 24 hours, and you summarize the results to index=summary_index_foo. There was a "foo" data source outage for a couple of days; however, you were able to backfill the data to index=foo. What is the best way to re-run the query without creating a lot of duplicates?

I am pretty sure that using "collect" will create duplicates. But will scheduling a one-time clone of the report over the outage days and summarizing the results create duplicates if the time range overlaps into the data before and after the outage? In other words, the outage time frame did not line up exactly to the minute, hour, or day. When you re-schedule/re-summarize the query, will that create duplicates if the same data/event already exists in the summary index for that time? Or will Splunk drop duplicates when using the summary index?

I am guessing duplicates will still be created, but I need a sanity check. Thank you.

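Splunk does not deduplicate on write to a summary index, so overlapping collect runs will duplicate events. The documented backfill helper is built for exactly this case; a sketch, assuming the report is named "foo_daily_summary" in the search app and the outage spans the last few days (names, ranges, and credentials are placeholders):

cd $SPLUNK_HOME/bin
./splunk cmd python fill_summary_index.py \
    -app search -name "foo_daily_summary" \
    -et -3d@d -lt @d \
    -dedup true -j 2 -auth admin:changeme

The -dedup true flag skips scheduled periods that already have summary data, which addresses the overlap before and after the outage. An alternative is to | delete the overlapping span from summary_index_foo first (requires the can_delete role) and then re-collect it cleanly.
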
We were presented with a situation where non-admin users needed access to Splunk license data from the _internal index, but we cannot grant them admin access or access to _internal. The solution we came up with was to create a new index and use a scheduled report with collect to copy only license events into it, then grant access to the new index on a role basis appropriate for these users.

This did not work: users with roles that should have read access to the index cannot see any events; only admins can, and I am struggling to figure out why. This is the basic search used to copy events:

index=_internal source=*license_usage.log type=Usage | collect index="collect_license"

It is scheduled to run every 15 minutes, collecting data for the past 15 minutes. The report is set to Display for App, with read access granted to Everyone. The job runs, and collect does copy data to the new index, where admin users can search it. However, the other roles with access to the index cannot. We cannot find any indication of restrictions on the "stash" sourcetype in the documentation, online, or in our environment. The users have role-based access to the new index, yet their searches against it turn up empty.

So I have hit a wall; I see no reason why the intended users lack access to events in the new index. The only thing left I can imagine is some inherited restriction from the source index being carried along when extracting and copying with collect. Any information and/or suggestions that could help solve this would be greatly appreciated.

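For what it's worth, collect does not carry access restrictions over from the source index; in cases like this the usual culprits are in the role definition itself. A sketch of an authorize.conf stanza to check against (the role name is a placeholder), with the two most common gotchas as comments:

# authorize.conf (sketch)
[role_license_viewer]
# the index must appear here, spelled exactly as on the indexers
srchIndexesAllowed = collect_license
# without this, users must type index=collect_license explicitly in every search
srchIndexesDefault = collect_license

Also worth verifying: that the collect_license index exists on the indexers (not only the search head), and that no restrictive srchFilter applies to the users via this or an inherited role.
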
I have created a dashboard with multiple tabs, one per application. The first tab contains the health status of all the apps. I want to be able to click the status of an application in the first tab and have the respective tab for that application open. Since I just need to open the tab, I am assuming I do not need to pass any tokens, but please advise. Also, when the dashboard loads for the first time, I would like only the first tab to load; the remaining tabs should load on click, either directly on the tab or through the visualization on the first tab. I have used li elements to list the tabs, plus tabs.js and tabs.css for displaying the apps in each tab. @niketnilay

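With the tabs.js pattern, switching tabs from a drilldown usually does need a token: the drilldown sets it, the custom JS watches it and activates the matching tab, and depends attributes keep non-active tabs from running their searches at load. A minimal Simple XML sketch (token and panel names are illustrative, and tabs.js must be extended to watch the token via its mvc token model):

<table>
  <search><query>... app health status search ...</query></search>
  <drilldown>
    <!-- pass the clicked app name; the tab JS switches tabs when this changes -->
    <set token="selected_app">$click.value$</set>
  </drilldown>
</table>

<!-- each app panel depends on its tab token, so only the first tab's searches run at load -->
<panel depends="$tab_app1$">
  ...
</panel>

The XML alone cannot activate a jQuery-driven tab; that part has to live in tabs.js.
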
E.g., how do I get both of the sums below in a single query?

sum(val_2) by application
sum(val_2) by val_1

Desired result (single query):

column1      column2
ABC              1478
FSD               4839
A                    5849
B                    478

or

column1      column2      column3      column4
ABC              1478              A                  5849
FSD               4839             B                  478

or whatever is possible in a single query. Please help.

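One way to get both aggregations in one search is to tag each event with both group keys, expand, and run a single stats. A sketch, assuming <your base search> returns application, val_1, and val_2:

<your base search>
| eval column1=mvappend(application, val_1)
| mvexpand column1
| stats sum(val_2) as column2 by column1

Each event contributes its val_2 once to its application group and once to its val_1 group, which matches the first desired output; rename or sort as needed.
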
Hello Splunkers, I need to understand the best way to forward my data in a multisite indexer cluster for disaster recovery management. For example, we have:

On Site A
1 manager node (active)
3 peer nodes [IDX_1A, IDX_2A, IDX_3A] (active)
1 search head (active)
2 heavy forwarders [HF_1A, HF_2A] (active)

On Site B
1 manager node (standby)
3 peer nodes [IDX_1B, IDX_2B, IDX_3B] (active)
1 search head (standby)
2 heavy forwarders [HF_1B, HF_2B] (standby)

On HF_1A and HF_2A, should outputs.conf be configured to send data to:
1) ALL Site A and Site B indexers (IDX_1A, IDX_2A, IDX_3A, IDX_1B, IDX_2B, IDX_3B), assuming the HFs can communicate with all of them, OR
2) Only the Site A indexers (IDX_1A, IDX_2A, IDX_3A), OR
3) Any other way?

Thanks in advance.

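A common alternative to hard-coding peers is indexer discovery: the forwarders ask the cluster manager for the current peer list, so failover and peer changes need no outputs.conf edits. A sketch (hostnames and the key are placeholders):

# outputs.conf on HF_1A / HF_2A (sketch)
[indexer_discovery:site_a_manager]
master_uri = https://manager-a.example.com:8089
pass4SymmKey = <your_key>

[tcpout:cluster_peers]
indexerDiscovery = site_a_manager
useACK = true

[tcpout]
defaultGroup = cluster_peers

Whether the manager hands back only local-site peers or all peers can be steered with site affinity on the forwarder (a site setting in the forwarder's server.conf), so option-2-style locality is still achievable while keeping option 1's resilience.
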
This one may not be a hard one, but I am asking because I don't know how to describe what I am doing, and thus cannot search the community posts to see if someone has already asked the question, but here it goes. What I am trying to do is create a search that will go back through the history of a particular field, find the last change to that data, and also display when that change happened. It is not something that I want running all the time, because I am sure it will take time to search through millions of logs, so it is just going to end up as a dashboard panel that can be run when needed.

Example: host ABC used to have IP address 1.1.1.1. The IP address is now 2.2.2.2, and I want to be able to find the time and date that host ABC changed IP address.

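One way to find the most recent value change is to sort by time and compare each event's value with the previous one using streamstats. A sketch, assuming the events carry host and ip fields (adjust the names to your data):

index=<your_index> host=ABC
| sort 0 _time
| streamstats current=f last(ip) as prev_ip by host
| where isnotnull(prev_ip) AND ip != prev_ip
| stats latest(_time) as changed_at, latest(prev_ip) as old_ip, latest(ip) as new_ip by host
| eval changed_at=strftime(changed_at, "%Y-%m-%d %H:%M:%S")

Run it over as narrow a time range as practical; the streamstats pass over millions of events is the expensive part.
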
Hi all, I have a DB query and need help with a date filter.

| dbxquery connection="ITDW" shortnames=true query="SELECT GETDATE() as 'CurrentTime', [Incident_Number], INC.[Company], [Customer], PPL.Region as 'Customer_Region', PPL.Site_Group as 'Customer_Site_Group', PPL.Site as 'Customer_Site', [Summary], [Notes], [Service], [CI], [Impact], [Urgency], [Priority], [Incident_Type], [Assigned_Support_Group], [Assigned_Support_Organization], [Status], [Assignee], [Status_Reason], [Resolution], [Reported_Date], [Responded_Date], [Closed_Date], [Last_Resolved_Date], [Submit_Date], [Last_Modified_Date], [Owner_Group] FROM [shared].[ITSM_INC_MAIN] INC LEFT OUTER JOIN [shared].[ITSM_CMDB_People_Main] PPL ON INC.Customer_ID = PPL.Person_ID WHERE (([Assigned_Support_Group] = 'Ops-WAN')) AND [Submit_Date] BETWEEN DATEADD(D,-700,GETDATE()) AND GETDATE()"

I want some older data, filtered to 2019 only, but when I search for more than 300 days back it does not show accurate data. How can I add a specific year, or a between-dates range, to the query?

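Since the query runs against SQL Server (GETDATE/DATEADD), the relative window can simply be replaced with fixed date literals. A sketch of just the WHERE clause; the rest of the dbxquery stays the same:

WHERE ([Assigned_Support_Group] = 'Ops-WAN')
  AND [Submit_Date] >= '2019-01-01'
  AND [Submit_Date] <  '2020-01-01'

The half-open range (>= Jan 1 2019, < Jan 1 2020) covers all of 2019 including times on Dec 31. YEAR([Submit_Date]) = 2019 works too, but wrapping the column in a function can prevent an index on Submit_Date from being used.
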
I have a dashboard that has a date filter, and I have embedded a report in this dashboard. Here is how the report part of the dashboard looks:

<panel>
  <table>
    <title>My Test Report</title>
    <search ref="my_test_report"></search>
  </table>
</panel>

So how can I use the dashboard date filter with this embedded report too?

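A referenced report normally keeps its own saved time range, but the dashboard can override it by adding earliest/latest elements bound to the time picker's token. A sketch, assuming the time input's token is named time_tok (rename to match yours):

<panel>
  <table>
    <title>My Test Report</title>
    <search ref="my_test_report">
      <earliest>$time_tok.earliest$</earliest>
      <latest>$time_tok.latest$</latest>
    </search>
  </table>
</panel>

Only the time range is overridden; the SPL and permissions still come from the saved report.
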
I have a Splunk heavy forwarder that is an rsyslog server as well. When Splunk sees the syslog data, it sets the sourcetype and then the index name before the data is sent to indexing.

props.conf

[rsyslog]
TRANSFORMS-force_vmware = force_sourcetype_vmware, force_ix_vmware

transforms.conf

[force_sourcetype_vmware]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(10\.30\.31\.\d+)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::esxi

[force_ix_vmware]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(?i)esxi$
DEST_KEY = _MetaData:Index
FORMAT = vmware

This works fine. I would now like to remove all lines (from VMware only) that start with:

<134>2021.....
<166>2021.....

To do so, I made a regex like this:

REGEX = ^<(134|166)>2021

I know that to drop events I should use:

DEST_KEY = queue
FORMAT = nullQueue

But I cannot get it to work. How do I make sure that only the correct data, and only from VMware, is removed?

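Transforms in a single TRANSFORMS- list run in order, but each transform reads only one SOURCE_KEY, so one transform cannot inspect both the host and _raw. A sketch that appends a third transform, keyed on _raw (the default SOURCE_KEY), to the existing chain:

# props.conf (sketch)
[rsyslog]
TRANSFORMS-force_vmware = force_sourcetype_vmware, force_ix_vmware, drop_vmware_noise

# transforms.conf (sketch)
[drop_vmware_noise]
# SOURCE_KEY defaults to _raw
REGEX = ^<(?:134|166)>2021
DEST_KEY = queue
FORMAT = nullQueue

As written, this drops any [rsyslog] event starting with <134>/<166>2021, not just VMware ones. If non-VMware hosts can emit the same prefix, one workaround is to extend the same REGEX to also require something VMware-specific later in the raw line, since the regex runs against the whole event.
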
Hi, I have a script that pulls the status of each service. I have defined a common sourcetype for it, with this LINE_BREAKER regex pattern:

([\r\n]+)\w+\=\"\w+\"\,\w+\=\"\w+\"\,\w+\=\d\,\w+\=\"\w+\"

The script outputs lines like this sample:

service_name="XXXX",os_service="jboss",status_value=1,status="Running"

It was all right until I started monitoring microservices, which break the above pattern in the os_service field. Sample output below; the issue is that os_service now has "-" in it, and the name varies for each of the sub-services. Is there a generic way to capture anything under os_service with a common regex, so that any os_service name is handled, both the normal ones as in the example above and the microservice ones?

service_name="Microservices",os_service="xx-xx-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-application-service",status_value=1,status="Running"
service_name="Microservices",os_service="buxx-service",status_value=1,status="Running"
service_name="Microservices",os_service="xx-cuxxxxxx-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxxx-organisation-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxxx-event-service",status_value=1,status="Running"
service_name="Microservices",os_service="coxx-xxx-check-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-bxx-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-pxxx-sanXXXXX-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-core-application-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-core-cuxxxxxx-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-core-organisation-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-core-event-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-core-xxx-check-service",status_value=1,status="Running"
service_name="Microservices",os_service="gateway-service",status_value=1,status="Running"
service_name="Microservices",os_service="xxx-bxx-service",status_value=1,status="Running"

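The fix is to stop assuming \w inside the quoted values; [^"]+ matches any quoted value, hyphens included. A sketch of a more tolerant line breaker (the same idea applies to search-time extractions):

# props.conf (sketch)
LINE_BREAKER = ([\r\n]+)service_name="[^"]+",os_service="[^"]+",status_value=\d+,status="[^"]+"

Anchoring on the literal field names is also more robust than four generic \w+= groups; if the field order can ever vary, loosening to ([\r\n]+)(?=\w+=") is an alternative.
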
Hi Splunk Community, I have run into an interesting scenario where I need to write a field extraction that will parse a specific part of WinEventLog add-on data and return the results. This is related to the Log4j vulnerability, so hopefully it has some real value.

The issue I am running into is that the regex I have built will match java files that contain 'log4j', but it will only extract the first instance it sees in the body of the text rather than all instances of log4j files. I believe I need a way to perform a positive lookahead (or something similar), match the result, and then continue matching on the same event before moving on.

Example data below:

Field Value Data: "C:\Something Something\Something Something Base\jre\bin\javaw.exe" -cp "C:\Something Something\Something Something Base\lib\patches.jar/;C:\Something Something\Something Something Base/classes;C:\Something Something\Something Something Base\lib/aopalliance-repackaged-2.5.0-b42.jar;C:\Something Something\Something Something Base\lib/slf4j-log4j12-1.7.5.jar;C:\Something Something\Something Something Base\lib/javax.annotation-api-1.2.jar;C:\Something Something\Something Something Base\lib/log4j-1.2-api-2.15.0.jar";C:\Something Something\Something Something Base//log/ff3ad640-9eb4-11eb-a0b2-1de605f6535b\mini_probe\23468"

First extraction query attempt:

Query - | rex field=Process_Command_Line "(?P<hasLog4>(?:([\/log4j]{6}.*?(?=;))))"
Result - /log4j-1.2-api-2.15.0.jar

The problem with the above extraction is that while it matches 'log4j' files, it only matches the first occurrence in the field value above and then moves on to the next event. I need it to read through the entire string and extract all instances of the matched regex before moving to the next event. Also, as you can see, it can miss certain types of 'log4j' files, so I will need to clean up the regex anyway to fix that.

Second extraction query attempt:

Query - | rex field=Process_Command_Line "(?P<test>C:(.*?)(?=jar|exe))"
Result - C:\Something Something\Something Something Base\jre\bin\javaw

The problem with this query is that it matches the first result in the field value immediately, moves on to the next event, and never reaches the 'log4j' file in the string.

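rex extracts only the first match unless it is told otherwise; with max_match=0 (unlimited) the named group becomes a multivalue field holding every hit. A sketch, with a regex keyed on the log4j substring instead of a fixed position:

| rex field=Process_Command_Line max_match=0 "(?P<hasLog4j>[^;\"]*log4j[^;\"]*\.jar)"
| mvexpand hasLog4j

max_match=0 is the documented way to capture all occurrences; [^;\"]* keeps each match inside a single classpath segment (so both slf4j-log4j12 and log4j-1.2-api are caught), and the mvexpand is optional if one result row per jar is preferred.
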
Hello, I'm working in Splunk Enterprise with search queries. I use a website monitoring app for my website, and I already run a search that alerts when the website is not responding; that works fine. How can I run a query that alerts when a website that was down 5 minutes ago is now OK? I would greatly appreciate your help. Br.

-------------------------------------------------------------------------------------------------------------------------------

My search that looks for errors:

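(The original error search was not included above.) One generic pattern for a recovery alert is to compare the last 5 minutes against the 5 minutes before that in a single search scheduled every 5 minutes. A sketch, assuming the monitoring events carry url and response_code fields (placeholders for whatever your app actually emits):

index=<your_index> sourcetype=<web_ping_sourcetype> url="https://your.site" earliest=-10m
| eval window=if(_time >= relative_time(now(), "-5m"), "current", "previous")
| stats count(eval(window="previous" AND response_code!=200)) as errors_before,
        count(eval(window="current" AND response_code=200)) as ok_now
| where errors_before > 0 AND ok_now > 0

Set the alert to trigger when results are returned: errors_before>0 means the site was down in the earlier window, and ok_now>0 means it has recovered in the latest one.
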
Hi all, I have to create a technical add-on to integrate Qumulo audit logs into Enterprise Security. I found that there is an archived app, but it did not contain any useful props, so I tried to build the CIM 4.x normalization myself. Has anyone encountered and solved this problem, or can anyone give me some hints? Ciao. Giuseppe

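For ES, the bulk of the work is usually a props.conf that maps the vendor fields onto a CIM data model (file audit events typically fit Change), plus an eventtype/tag pair. A rough sketch only; the sourcetype and all left-hand field names below are assumptions, not actual Qumulo field names:

# props.conf (sketch)
[qumulo:audit]
FIELDALIAS-cim_user = user_id AS user
FIELDALIAS-cim_src = client_ip AS src
FIELDALIAS-cim_object = file_path AS object
EVAL-action = case(operation LIKE "%delete%", "deleted", operation LIKE "%create%", "created", true(), "modified")

# eventtypes.conf (sketch)
[qumulo_audit_change]
search = sourcetype=qumulo:audit

# tags.conf (sketch)
[eventtype=qumulo_audit_change]
change = enabled

Once tagged, coverage can be checked with | datamodel Change All_Changes search or the CIM validation dashboards.
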
Hello. I have two, possibly related, problems with my three-node SHC (version 8.2.2). One or both may stem from using the deployer to push out changes to app.conf for the default apps (I was trying to disable checks for updates).

1. On the DMC, each SHC node reports file differences in app.conf for the default apps. Also, some files are listed as missing for splunk_essentials_8_2. I tried correcting this by reversing the work undertaken with the deployer, without success. I then decided to make the same changes on each node manually.

2. The SHC nodes report:

[date] ERROR ConfReplicationThread [13247 ConfReplicationThread] - Error pulling configurations from captain=https://shc_1:8089, consecutiveErrors=74 msg="Application does not exist: 504df959a582d73": Search head cluster member (https://shc_2:8089) is having problems pulling configurations from the search head cluster captain (https://shc_1:8089). Changes from the other members are not replicating to this member, and changes on this member are not replicating to other members. Consider performing a destructive configuration resync on this search head cluster member.

These messages are stopped with:

bin/splunk resync shcluster-replicated-config

However, the problem returns if the SHC nodes are restarted. I would be grateful for your help in fixing these problems.

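For the original goal (disabling update checks), the usual advice is to ship the override from the deployer as a local override inside the app, rather than editing files on individual members, so every member ends up with an identical baseline. A sketch of the deployer-side layout (the app name and credentials are examples):

# on the deployer: $SPLUNK_HOME/etc/shcluster/apps/<app_name>/local/app.conf
[package]
check_for_updates = false

# then push the bundle to the cluster via any member
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://shc_1:8089 -auth admin:changeme

apply shcluster-bundle distributes the same files to every member, which tends to clear the DMC file-difference warnings; per-member manual edits are precisely what the conf replication system keeps fighting against.
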
When navigating to Authentication Methods, the banner fails to load its JavaScript, and the "LDAP settings" link is not displayed (nor are Messages, Settings, Activity, or Help on the banner) when using Chrome. When I open the Chrome console, I get the following errors:

common.min.js:33956 Uncaught SyntaxError: Invalid or unexpected token
i18ncatalog?autoload=1&version=%40281E8CE18DE93D7582107EA51FC4922C2368567DA901CA15529AA32D442DCF54:1 Uncaught ReferenceError: i18n_register is not defined
authoverview:273 Uncaught ReferenceError: $ is not defined
authoverview:537 Uncaught ReferenceError: $ is not defined
modules-c0c0a7f0612c552bdcc203c94d947b0fa5bcf748.min.js:511 ReferenceError: i18n_register is not defined
modules-c0c0a7f0612c552bdcc203c94d947b0fa5bcf748.min.js:511 Uncaught ReferenceError: $ is not defined
authoverview:612 Uncaught TypeError: Cannot set properties of undefined (setting 'loadParams')
authoverview:626 Uncaught TypeError: Cannot read properties of undefined (reading 'System')
init.js:3 Uncaught ReferenceError: i18n_register is not defined

These all seem to point to issues with i18n_register, but I'm not sure exactly what has gone on here that I would be getting these errors. It's being used as a search head. Version is 8.1.4. Thanks!!

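Since the very first failure is a SyntaxError in common.min.js, everything after it ($ and i18n_register being undefined) is likely just fallout from that bundle never executing. Before deeper debugging, it may be worth invalidating cached static assets; a couple of standard, low-risk steps (a sketch, not a guaranteed fix):

# 1) bump the static-asset cache version, then hard-refresh Chrome (Ctrl+Shift+R):
#    https://<your_search_head>:8000/en-US/_bump

# 2) restart just the web tier to regenerate the minified bundles:
$SPLUNK_HOME/bin/splunk restart splunkweb

If the SyntaxError persists with a clean browser profile, the common.min.js on disk may be truncated or corrupt; comparing its size against another 8.1.4 install is a quick check.
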
Hi all, I noted a strange thing: in Splunk 8.2.2 with ES 6.6.2, the customer scheduled some daily reports with a time period of 24 hours, and I found that dispatch.ttl for these reports has the default value of "2p", which for a daily schedule should mean two days. But the customer also found that the search results are kept on the Splunk server for around 30 days. Can anyone help me understand the reason for this behaviour, where to look for the problem, and how to reduce this disk space usage? Ciao. Giuseppe

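Artifact lifetime is not governed by dispatch.ttl alone: alert/report actions carry their own ttl (alert_actions.conf), and jobs that are saved, shared, or embedded are kept much longer. A sketch of how to inspect what is actually in play for one report (the report name is a placeholder):

# effective settings for the report (run on the search head)
$SPLUNK_HOME/bin/splunk btool savedsearches list "My Daily Report" --debug | grep -i ttl
$SPLUNK_HOME/bin/splunk btool alert_actions list --debug | grep -i ttl

# list artifacts still on disk with their remaining ttl
| rest /services/search/jobs splunk_server=local
| table sid, label, author, ttl, updated

If the long-lived jobs turn out to be saved or action-extended, reducing retention means lowering the relevant ttl in alert_actions.conf or savedsearches.conf, or deleting the saved jobs, rather than touching dispatch.ttl again.
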
Is the Splunk Security Operations Suite available on-prem, in the cloud, or both?

The automatic color of my area chart is grey. I want to change it to purple to match the organization's theme. Kindly help me with how to do it.

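In Simple XML this is usually done with the charting color options. A sketch; the hex value and the series name ("count") are placeholders for your own:

<chart>
  <search><query>... | timechart count</query></search>
  <option name="charting.chart">area</option>
  <!-- color every series in order -->
  <option name="charting.seriesColors">[0x800080]</option>
  <!-- or pin a color to one named series -->
  <option name="charting.fieldColors">{"count": 0x800080}</option>
</chart>

seriesColors applies positionally to all series, while fieldColors binds a color to a specific field name regardless of order.
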