
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, I currently have the DB Connect plugin installed to receive the logs from an Aurora database. To date everything works without problems, but my client tells me that he needs to go from version 11.6 to version 11.5. I would like to know if I should do something, or does the fact that it is already working with the current version mean that a version change should not affect anything?

https://docs.splunk.com/Documentation/DBX/3.8.0/JDBCPostgres/About
Hello, I am currently receiving ADAudit Plus logs, but I have no idea what use cases I can build from this source. I also don't see an app with dashboards that could help me. Any suggestions?
Hello,

I am performing a Splunk UF installation of version 8.0.5 and getting the following errors logged:

==> splunkd.log <==
08-23-2022 10:00:29.018 -0400 WARN TcpOutputProc - Pipeline data does not have indexKey. [_conf] = |||\n
08-23-2022 10:00:29.018 -0400 WARN TcpOutputProc - The event is missing source information. Event : no raw data

I am not sure what these errors mean. I can see that the sources defined in the config files are sending logs to Splunk. Could someone suggest how I can fix these errors to make sure no data is lost?
Hi Team, I'm trying to calculate response time from the logs below by using the trace ID (or any other unique value), as my logs don't have any specific URL.

http-nio-8080-exec-8,WARN,com.xxx.product.stoc.jpa.graph.AgreementProcessorServiceImpl, CHANNEL_ID : UI, RUN_ID : F3E51C72B62AC15C4E3FF2458A30C88F, TRACE_ID : 7uITsJ7CQ7MbWZZQZ9Ntz3, COLLATE_USER_ID : mashetta, EXTERNAL_USER_ID : _] dt.trace_sampled: true, dt.trace_id: ed71da8c7bedadc2c9c568c04d91eafe, dt.span_id: feb179cfd106945f Facility 5766 has no Exposures, which are needed for a LGD calculation
scheduling-1,INFO,com.xxxeventbus.sdk.listenerimpl.service.RetryCron, CHANNEL_ID : , RUN_ID : , TRACE_ID : , COLLATE_USER_ID : , EXTERNAL_USER_ID : ] Deleted 6 Completed or Failed received events.
elastic-595,WARN,com.xxx.product.stoc.jpa.graph.AgreementProcessorServiceImpl, CHANNEL_ID : , RUN_ID : , TRACE_ID : , COLLATE_USER_ID : , EXTERNAL_USER_ID : ] All the Exposures have no Start Date and all the LED/Exposure links have no Rank
EntityChangeExecutor_39,WARN,org.elasticsearch.client.RestClient, CHANNEL_ID : , RUN_ID : , TRACE_ID : , COLLATE_USER_ID : , EXTERNAL_USER_ID : ] dt.trace_sampled: true, dt.trace_id: 73a4a5de4e28679b9c9330c852d9cc59, dt.span_id: 229f2f590836fc19 request [POST http://<myurl:9200/] returned 2 warnings: [299 Elasticsearch-7.17.2-de7261de50d90919ae53b0eff9413fd7e5307301 "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See <HTML File> to enable security."],[299 Elasticsearch-7.17.2-de7261de50d90919ae53b0eff9413fd7e5307301 "[ignore_throttled] parameter is deprecated because frozen indices have been deprecated. Consider cold or frozen tiers in place of frozen indices."]
elastic-602,INFO,com.xxx.product.stoc.jpa.service.eligibility.EligibilityServiceImpl, CHANNEL_ID : , RUN_ID : , TRACE_ID : , COLLATE_USER_ID : , EXTERNAL_USER_ID : ] Calculated eligibility of LED 6209 as 0 with result id 1885.
elastic-602,INFO,com.xxx.product.stoc.jpa.service.eligibility.EligibilityServiceImpl, CHANNEL_ID : , RUN_ID : , TRACE_ID : , COLLATE_USER_ID : , EXTERNAL_USER_ID : ] Checking eligibility of LED 6209...
INFO,com.xxx.product.stoc.jpa.service.eligibility.EligibilityServiceImpl, CHANNEL_ID : , RUN_ID : , TRACE_ID : , COLLATE_USER_ID : , EXTERNAL_USER_ID : ] Calculated eligibility of LED 6235 as 0 with result id 1883.

Your help is much appreciated.
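A minimal sketch of one way to get a duration per trace, assuming the events live in a single index/sourcetype (the index and sourcetype names below are placeholders) and that response time is the span between the first and last event carrying the same TRACE_ID:

index=app_logs sourcetype=app:java "TRACE_ID"
| rex "TRACE_ID : (?<trace_id>[^,\s]+)"
| where isnotnull(trace_id)
| stats range(_time) AS response_time_sec by trace_id

The rex pattern requires at least one non-comma character, so the empty "TRACE_ID : ," entries fail extraction and drop out at the where clause.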
I have a field value like this that I want to exclude:

[22minfo[3: host.console[0]

The searches I can think of either don't do anything or return an error. Note: I am trying to speed up a search, so I do not want to use regex. Searches I tried:

message != [*
message != "["*
message != "[*"
message != '[*'
message != '['*
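One non-regex option, offered as a sketch rather than a confirmed fix: the like() function, whose only wildcards are % and _, treats [ as a literal character. Assuming the field is named message as above:

... | where NOT like(message, "[%")

This drops any value that starts with a literal bracket without falling back to regex matching.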
We have some servers that are deployed in AWS, and we want to monitor some files that are on them. Typically I'd go with the UF, but in this case our indexers only have private IPs. We do have some Heavy Forwarders that can be publicly addressed, though we have only used those for HEC. The Heavy Forwarders do have receiving set up on port 9997, but wouldn't that index the data locally on those servers? Have any of you had a similar issue? We have a clustered on-prem environment, BTW.
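A heavy forwarder only indexes locally if its outputs allow it. A minimal outputs.conf sketch for the heavy forwarder (the group name and indexer addresses are placeholders for your environment):

# outputs.conf on the heavy forwarder
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.11:9997,10.0.0.12:9997

With index = false under [indexAndForward], the HF accepts the UF traffic on 9997 and relays it to the private indexers instead of writing buckets itself.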
Hi everyone, I'm looking for a solution here. While playing around with the app builder on SOAR, I got the asset interface working fine, and from the code I can read the values from it, but the password type comes back as an encrypted string (as the field is a password field). How can I decrypt it so the code can use that value at runtime?
We had TA-mailclient v1.5.5 installed on our IDM. However, when trying to configure the data input (Data inputs -> Mail server), that option was not available. After communicating with Splunk Support, it became clear that the option is not available (not visible) on the Splunk Cloud IDM for the role sc_admin. Any ideas or suggestions on how to solve this? Is it a known bug, or is it by design and the TA is not intended to run on the IDM?
Hello guys, if we add a new indexer to an existing cluster of 3 indexers with RF=3 and SF=3, how will the primary and replicated buckets be spread? Will the 4th indexer receive replicated buckets too? Thanks.
I am trying to remove duplicates from a field result:

index=tenable* sourcetype="*" severity_description="*" | table severity_description ip | stats count by severity_description

Results:

Severity_description    Count
Critical Severity       518
High Severity           46837
Medium Severity         7550
Low Severity            1460
Informative             275192

Inside each severity_description row there are duplicates. I know that by running:

index=tenable* sourcetype="*" severity_description="Critical Severity" | table ip riskFactor | stats dc(ip) AS ip | rename ip as Critical | addcoltotals | stats sum(Critical) as Critical

Result: Critical = 128

I am trying to run the first search and remove the duplicates automatically from each row.
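A minimal sketch using the question's own field names: a distinct count per severity collapses the duplicate IPs in one pass, with no separate search per row:

index=tenable* sourcetype="*" severity_description="*"
| stats dc(ip) AS distinct_ips by severity_description

dc() counts each ip once per severity_description group, so Critical Severity should come back as 128 (matching the second search) rather than 518.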
How can I build a report of AD users logged in to multiple PCs at the same time? I'm trying to get a list of any user that has logged in (event 4624) to more than one PC.
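A minimal sketch, assuming the Windows Security events sit in a wineventlog index with the usual Account_Name and host fields (adjust both to your environment):

index=wineventlog EventCode=4624
| bin _time span=5m
| stats dc(host) AS pc_count values(host) AS pcs by Account_Name _time
| where pc_count > 1

The 5-minute bucket is a stand-in for "at the same time"; widen or narrow the span as needed.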
Hello Splunkers, I am trying to execute a SQL query, but it is throwing a "com.microsoft.sqlserver.jdbc.SQLServerException: The query has timed out." error. I have also increased the timeout window, still no luck. Can anyone please advise how to resolve this issue?
Hi, we are in a situation in which the client doesn't use Microsoft Teams, hence we need a way to integrate AppDynamics with Google. I tried Google Chat spaces: I created a webhook, copied the generated URL, and pasted it under Raw URL in the HTTP request template on the AppDynamics controller. The configuration is below:

Request URL:
  Method: POST
  Raw URL: https://chat.googleapis.com/v1/spaces/AAAAnhjhh-g/messages?key=AIzaSyDdI0hCZtE6vySjMm-WEfRq3CPzqKqqsHI&token=XQu1LPYc4W2fpdWL3RJCwE6DtRmw_ZcXkKpv88TP3mY%3D
  URL encoding: UTF-8

Payload:
  MIME type: application/json
  Payload encoding: UTF-8
  Payload: { 'text' : '$latestEvent.severity' }

Using the above configuration, when I try to run the test action, I am not getting any message in my Google space. Where am I going wrong? Please assist me. Thank you in advance.
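One hedged guess at the failure: the Google Chat webhook API expects strict JSON, and single-quoted strings are not valid JSON, so a payload in that shape may be rejected before any message is posted. The same payload in valid JSON:

{ "text": "$latestEvent.severity" }

If the test still sends nothing after that change, checking whatever HTTP response the controller reports for the test action would show whether the request is reaching chat.googleapis.com at all.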
Hey all, when I use ldapsearch I am receiving the following error:

External search command 'ldapsearch' returned error code 1. Script output = "error_message=Invalid credentials for the user with binddn="americas\servicesiem". Please correct and test your SA-ldapsearch credentials in the context of domain="default""

This happens for the below query:

| ldapsearch domain=default search="(employeeID=1344541)"
| eval ExchangeServer="On-Premise Exchange - "+replace(replace(homeMDB, ",.+", ""),"CN=","")
| eval Location=l+" "+st
| eval MailboxLocation=if(isnull(ExchangeServer),"O365 Online", ExchangeServer)
| table employeeID, dellEmployeeStatus, accountExpires, givenName, sn, displayName, mail, extensionAttribute14, smtporsip, department, title, Location, employeeType, sAMAccountName, MailboxLocation
| rename dellEmployeeStatus AS Status, accountExpires AS "Account Expires", employeeID AS "Badge ID", sn AS LastName, givenName AS FirstName, displayName AS "Display Name", department AS Department, title AS "Job Title", sAMAccountName AS NTID, mail AS "Primary Email", extensionAttribute14 AS "Secondary Email", MailboxLocation AS "Mailbox Location", employeeType AS Company
| transpose
| rename column as "User Info", "row 1" as "Value"
| appendpipe [stats count | table Remark]

Can you please help? Many thanks!
In Splunk there exists a delete command. Any admin in Splunk can give themselves the capability to use this command. In theory, if a single admin user in our Splunk environment is compromised, the attacker can delete all data from the Splunk indexers. I know that the data is not actually deleted from disk when using the delete command, but it is still, for all practical purposes, deleted. Is there any way to securely disable the delete command/capability in Splunk, so that not even administrators can get access to it? Preferably we want to disable the command on the indexer layer, so that even if the OS on the server hosting the search head is compromised, the command cannot be used. Alternatively, if the command can be disabled on the search head in a way that it cannot be re-enabled through the web interface, that is better than nothing.
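For reference, the capability behind the command is delete_by_keyword. A hedged authorize.conf sketch that keeps it out of the admin role; note this does not by itself stop someone with filesystem access from re-granting it, which is the hard part of the question:

# authorize.conf
[role_admin]
delete_by_keyword = disabled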
I want to create a chart that shows all the processes being executed and the percentage of CPU used. I tried this after reading the documentation, but it doesn't work:

index=perfmon ProcessName="*" | chart count(cpu_load_percent) over ProcessName
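A minimal sketch, assuming the data comes in via the Splunk Add-on for Windows Process performance counters (the object, counter, instance, and Value field names below follow that add-on; adjust them if your sourcetype differs):

index=perfmon object=Process counter="% Processor Time" instance!=_Total
| chart avg(Value) over instance

count() only tallies events, so it can never return a percentage; an aggregate over the counter's Value field (avg, latest, max) is what produces the CPU figure per process.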
Hi All, how can I monitor the ping status of Linux and Windows servers in Splunk Enterprise? Is there any Splunk-supported add-on or script available? Kindly help me with this. Thanks and regards, Madhu M S
I installed the universal forwarder on 4 machines, and their event logs are arriving on my PC. I want to compare the event log host with the universal forwarder IP address from which it was received. To get the hostname I look in index="_internal"; the event logs themselves are in index=* with EventCode=4624, and the universal forwarder check is against index=_internal.

Query:

index=_internal fwdType=uf
| table hostname sourceHost
| rename hostname as uf_username sourceHost as uf_hostname
| join sourceHost
    [search index=* EventCode=4624 Source_Network_Address=* Account_Name=Administrator Account_Domain=*
    | table Source_Network_Address Account_Name host]

How do I compare these so that, when the hostname matches in both indexes, I get the IP address from index=_internal fwdType=uf (sourceHost) and from index=* (Source_Network_Address)?
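A minimal sketch of one reading of the intent: normalize both sides to a shared host field, then pull one IP from each index. The sourceIp field is an assumption about the forwarder-connection metrics in _internal; verify it exists in your events:

index=_internal fwdType=uf
| stats latest(sourceIp) AS uf_ip by hostname
| rename hostname AS host
| join type=inner host
    [ search index=* EventCode=4624 Account_Name=Administrator
      | stats latest(Source_Network_Address) AS logon_ip by host ]
| table host uf_ip logon_ip

One pitfall in the original query: it renames sourceHost to uf_hostname and then joins on sourceHost, which no longer exists at that point; the join field has to survive on both sides.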
I'm looking at events and trying to determine which files are not "deleted" from a folder on a server after the files have been 'uploaded'. If a file is deleted, it means it has been successfully transferred. I'm able to use the 'transaction' command to determine the duration of a successful file transfer; however, I'm not able to figure out which files are stuck in the folder, because the 'delete' event did not occur for some files. Help would be appreciated. This is what I have so far, but it needs fixing to determine which files are "stuck"... I think a join might be needed?

index=main* ("Found new file" OR "Deleted file")
| rex field=_raw "Found new file .*\\\\(?P<files>.*)\"}"
| rex field=_raw "Deleted file (?P<files>.*)\"}"
| transaction user files keepevicted=t mvlist=true startswith="Found new file" endswith="Deleted file"
| table user files duration _raw
| sort _time desc
| where duration=0
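A minimal sketch of a stats-based alternative (no join needed), reusing the question's own extractions; the idea is to record which actions each user/file pair ever saw, then keep the pairs that were found but never deleted:

index=main* ("Found new file" OR "Deleted file")
| rex field=_raw "Found new file .*\\\\(?P<files>.*)\"}"
| rex field=_raw "Deleted file (?P<files>.*)\"}"
| eval action=if(like(_raw, "%Deleted file%"), "deleted", "found")
| stats values(action) AS actions latest(_time) AS last_seen by user files
| where isnull(mvfind(actions, "deleted"))
| convert ctime(last_seen)

mvfind returns null when no value in the multivalue field matches, so the where clause keeps exactly the stuck files.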
Hi, how can I send a scheduled report when it is empty (no events in the search)? I need to send a daily scheduled report from an alert, but sometimes there are no results. The recipients need to see the CSV report even if it is empty, but the visualization won't appear if there are no results. Does anyone know how I can do that? Just the table visualization with empty results/values. fillnull doesn't work for this, or am I using it wrong? Thanks!
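A minimal sketch of one common workaround: append a placeholder row when the search returns nothing, so the table (and the CSV attachment) always has at least one line. The index, sourcetype, and field names are placeholders for whatever your report tables:

index=my_index sourcetype=my_sourcetype
| stats count by field1 field2
| appendpipe
    [ stats count AS events
      | where events=0
      | eval field1="No results found", field2="-"
      | fields - events ]

appendpipe runs its subpipeline over the (empty) result set, so the placeholder row only appears when the main search produced zero rows.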