All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, we need your help with file permissions. We are installing the Splunk agent on AIX servers to read log files. We don't want to add the Splunk ID to a group to grant read access, so we are trying ACL commands such as acledit and aclput to grant read access on the log path. We have tried these steps:
1) export EDITOR=/usr/bin/vi
2) acledit dir
3) enabled "permit r-- u:abc_id"
4) to grant read access recursively under the directory: aclget DIRECTORY | aclput -R DIRECTORY
Question: is there a better way to grant the Splunk user read access to the historical files? Could you please help us grant read access to all historical files and new files with ACLs?
We really love the AppD machine agent, which does a great job of listing the processes running on a virtual host. How can we create health alerts that use the process running state? We want to create an alert when a key process stops or goes down unexpectedly.
I have a table that shows the number of missing patches for our servers. I am trying to create a pie chart that shows what percentage of all of our servers are missing patches. I have tried adding a lookup file that lists all our servers, and also adding another sourcetype with that same list. I can't seem to figure out how to combine the two so I can view the list of servers that need a patch against the total list of servers we have.
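Setting SPL aside for a moment, the calculation itself is just set arithmetic: treat the lookup as the full inventory and the patch table as the subset. A minimal Python sketch with made-up server names:

```python
# Hypothetical data: the full inventory (from the lookup) and the
# servers that currently appear in the missing-patch table.
all_servers = {"srv1", "srv2", "srv3", "srv4"}
needs_patch = {"srv2", "srv4"}

# Percentage of the inventory that is missing at least one patch.
pct_missing = 100 * len(needs_patch & all_servers) / len(all_servers)
print(pct_missing)  # 50.0
```

In Splunk the same idea is usually expressed by appending the inventory to the results (e.g. `| inputlookup append=t your_servers.csv`) and computing the ratio with stats/eval; the exact SPL depends on your field names.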
Creating inputs for "Tenable.sc vulnerability": what should the "Enter the query name for Tenable.sc vulnerability filter" value be? Is there an example of a "query name for Tenable.sc vulnerability filter"?
Hello, I'm seeing an issue where a tstats search is slow because of an automatic lookup. I'm running the searches over ranges where data model acceleration should be 100%. We do not receive a constant stream of data, only scheduled batches. The DMA runs over a raw index that has automatic lookups, and the output fields from the automatic lookups are defined in the data model. When I run searches using these lookup output fields, the tstats search doesn't appear to find anything in the tsidx files and falls back to a normal search. Example:
| tstats c from datamodel=test_dm where test_dm.output_field_1 = 1
This search is very slow even though test_dm.output_field_1 is part of the data model. From search.log, it falls back to a raw index search instead of using the summary data, even though there should be something in the tsidx summaries. If I don't specify a specific value in the search, it runs fast as expected:
| tstats c from datamodel=test_dm where test_dm.output_field_1 = *
It also runs just as fast if I use summariesonly=t, like this:
| tstats summariesonly=t c from datamodel=test_dm where test_dm.output_field_1 = 1
Any other searches where the fields come from the raw index rather than an automatic lookup are fine, such as:
| tstats c from datamodel=test_dm where test_dm.field1 = 1
I'm really confused why this is happening. Filtering in the WHERE clause works fine with any other field. Only the fields that come from automatic lookups cause these odd fallback searches on the raw index, unless I use summariesonly=t or a wildcard value. Does anyone have any idea?
I need to do a basic search to find when a computer was last logged on, plus any network traffic information based on the IP address.
I want to combine several sources into one table, and I'm using this search:
sourcetype="firstsourcetype" somefield="value" | head 50 | join uuid [search sourcetype="secondsourcetype"] | join uuid [search sourcetype="thirdsourcetype"]
But if one of the sourcetypes doesn't exist, I don't get any data at all. I'm looking for a way to fix this. Thanks in advance.
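Splunk's join defaults to an inner join, which drops rows that have no match in the subsearch; `join type=outer uuid [...]` (a left join) keeps them. The difference can be sketched in Python with hypothetical uuid rows:

```python
def outer_join(left_rows, right_rows, key="uuid"):
    """Left/outer join: every left row survives; matching right-side
    fields are merged in when the key exists on the right."""
    right = {r[key]: r for r in right_rows}
    return [{**row, **right.get(row[key], {})} for row in left_rows]

first = [{"uuid": 1, "a": "x"}, {"uuid": 2, "a": "y"}]
second = [{"uuid": 1, "b": "z"}]  # uuid 2 has no match on this side

result = outer_join(first, second)
# uuid 1 is enriched with field b; uuid 2 is kept without it
```

Many answers also suggest dropping join entirely and collecting all sourcetypes in one base search followed by `stats values(*) as * by uuid`, which tolerates missing sourcetypes by construction.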
Hi, we have a service account, svc_account, that should only log into certain servers (Server1, Server2, Server3). How would we create an alert to notify us if svc_account logs into a server other than Server1, Server2, or Server3? Thank you for your help.
Has anyone connected AppDynamics to external status page applications? We track SLA metrics and would be interested in reporting a simple green/yellow/red status to a public page; we would not want to expose AppD externally, but rather push the status metrics programmatically.
Hello, I have a table (Simple XML) with a custom column that adds a check button to select rows and run some action on them. Basically, the custom renderer looks like this:

var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function(cell) {
        // Enable this custom cell renderer for the Actions field
        return _(["Actions"]).contains(cell.field);
    },
    render: function($td, cell) {
        if (cell.field === "Actions") {
            $td.html('<label class="checkbox"><a href="#" class="btn customcheck"><i id="btnRadio" class="icon-check" style="display: none;"></i></a></label>');
        }
    }
});

Then of course I have the button's change listener, the action function, etc. It works fine. Once the action is applied, it changes the value of another field, say the Status field, from "Open" to "Close", something like this:
Actions------IdCol------Status
chkbox---------AAA--------Open
chkbox--------BBB---------Close
chkbox--------CCC--------Open
This also works. Now I'd like to disable the checkbox on the rows having Status=Close. I've tried to access the rows and columns of the table through JavaScript, for example with document.getElementById("myTable"), looping over the cell values and adding the "disabled" attribute, like this:

$(document).ready(function() {
    var table = document.getElementById("myTable");
    for (var row = 0, n = table.rows.length; row < n; row++) {
        for (var col = 0, m = table.rows[row].cells.length; col < m; col++) {
            if (table.rows[row].cells[col].innerHTML == "Close") {
                table.rows[row].cells[0].setAttribute("disabled", true);
            }
        }
    }
});

But since the table is rendered with the Splunk TableView component, document.getElementById() returned "undefined". What is the correct way to achieve this?
I am trying to ingest data into Splunk via Splunk HEC using a Python script, sending the data in batches. What is the optimum size of the payload (data) to send in a single POST request to optimize the performance of the ingestion script?
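For context: HEC accepts several event objects concatenated in a single POST body, and the server-side cap on request size is governed by `max_content_length` in the `[http_input]` stanza of limits.conf, so rather than chase one magic number, the usual approach is to pack events into batches that stay safely below that cap (payloads in the hundreds of KB to a few MB generally perform well). A minimal, hedged Python sketch of such a batcher; the endpoint URL and token below are placeholders:

```python
import json

def batch_events(events, max_bytes=1_000_000):
    """Group events into concatenated-JSON payloads for HEC, each
    at most max_bytes, since HEC accepts multiple {"event": ...}
    objects back to back in one POST body."""
    batches, current, size = [], [], 0
    for ev in events:
        blob = json.dumps({"event": ev})
        if current and size + len(blob) > max_bytes:
            batches.append("".join(current))
            current, size = [], 0
        current.append(blob)
        size += len(blob)
    if current:
        batches.append("".join(current))
    return batches

# Posting each batch (hypothetical host and token):
# import urllib.request
# for payload in batch_events(my_events):
#     req = urllib.request.Request(
#         "https://splunk.example.com:8088/services/collector/event",
#         data=payload.encode("utf-8"),
#         headers={"Authorization": "Splunk <hec-token>"})
#     urllib.request.urlopen(req)
```

Measuring throughput with your own event sizes is the only reliable way to pick the final number; fewer, larger requests amortize HTTP overhead until you approach the server's configured limit.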
Can anyone advise on how to extract the fields in the following sample event log entry using xpath? I can't seem to get the syntax right, since the Component field repeats. I really want to pull all the fields under the RequestAuditComponent type.
----------- SNIP --------------
2/21/2020 09:26:55 AM
LogName=Security
SourceName=AD FS Auditing
EventCode=1201
EventType=0
Type=Information
ComputerName=ADFS.contoso.com
User=ADFS_Svc
Sid=S-1-2-19-1257343421-143158401-1347568098-54312
SidType=1
TaskCategory=3
OpCode=Info
RecordNumber=36801840
Keywords=Audit Failure, Classic
Message=The Federation Service failed to issue a valid token. See XML for failure details.
Activity ID: 22cf1b39-bdc5-44ec-a216-411192f514d0
Additional Data
XML: <?xml version="1.0" encoding="utf-16"?>
<AuditBase xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="AppTokenAudit">
  <AuditType>AppToken</AuditType>
  <AuditResult>Failure</AuditResult>
  <FailureType>GenericError</FailureType>
  <ErrorCode>N/A</ErrorCode>
  <ContextComponents>
    <Component xsi:type="ResourceAuditComponent">
      <RelyingParty>http://fs.contoso.com/adfs/services/trust</RelyingParty>
      <ClaimsProvider>N/A</ClaimsProvider>
      <UserId>myuser@contoso.com</UserId>
    </Component>
    <Component xsi:type="AuthNAuditComponent">
      <PrimaryAuth>N/A</PrimaryAuth>
      <DeviceAuth>false</DeviceAuth>
      <DeviceId>N/A</DeviceId>
      <MfaPerformed>false</MfaPerformed>
      <MfaMethod>N/A</MfaMethod>
      <TokenBindingProvidedId>false</TokenBindingProvidedId>
      <TokenBindingReferredId>false</TokenBindingReferredId>
      <SsoBindingValidationLevel>NotSet</SsoBindingValidationLevel>
    </Component>
    <Component xsi:type="ProtocolAuditComponent">
      <OAuthClientId>N/A</OAuthClientId>
      <OAuthGrant>N/A</OAuthGrant>
    </Component>
    <Component xsi:type="RequestAuditComponent">
      <Server>http://fs.contoso.com/adfs/services/trust</Server>
      <AuthProtocol>WSFederation</AuthProtocol>
      <NetworkLocation>Extranet</NetworkLocation>
      <IpAddress>10.0.0.1</IpAddress>
      <ForwardedIpAddress>10.0.0.1</ForwardedIpAddress>
      <ProxyIpAddress>N/A</ProxyIpAddress>
      <NetworkIpAddress>N/A</NetworkIpAddress>
      <ProxyServer>AZW-XADFS01</ProxyServer>
      <UserAgentString>Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:83.0) Gecko/20100101 Firefox/83.0</UserAgentString>
      <Endpoint>/adfs/ls/</Endpoint>
    </Component>
  </ContextComponents>
</AuditBase>
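Whatever the SPL xpath syntax ends up being, the selection logic itself is simple: pick the <Component> whose xsi:type attribute equals RequestAuditComponent and read its children. Here is that logic prototyped in Python with ElementTree, against a trimmed, hypothetical version of the event (in Splunk you would first isolate the XML substring of _raw before applying xpath):

```python
import xml.etree.ElementTree as ET

# The xsi:type attribute is namespace-qualified, so match it by full name.
XSI_TYPE = "{http://www.w3.org/2001/XMLSchema-instance}type"

def request_audit_fields(xml_text):
    """Return {tag: text} for every child of the Component element
    whose xsi:type is RequestAuditComponent."""
    root = ET.fromstring(xml_text)
    fields = {}
    for comp in root.iter("Component"):
        if comp.attrib.get(XSI_TYPE) == "RequestAuditComponent":
            for child in comp:
                fields[child.tag] = child.text
    return fields

sample = """<AuditBase xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ContextComponents>
    <Component xsi:type="AuthNAuditComponent"><PrimaryAuth>N/A</PrimaryAuth></Component>
    <Component xsi:type="RequestAuditComponent">
      <IpAddress>10.0.0.1</IpAddress>
      <Endpoint>/adfs/ls/</Endpoint>
    </Component>
  </ContextComponents>
</AuditBase>"""

fields = request_audit_fields(sample)
print(fields["IpAddress"])  # 10.0.0.1
```

The key detail that trips up most xpath attempts is that xsi:type is a namespaced attribute, so a plain @type predicate will not match it.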
Dear All, greetings! I need your help. Our Splunk log collector server (x.x.x.x, port=y) can't receive data from any of the syslog senders that send data to it. I have checked the port with telnet (telnet x.x.x.x y) and it is not responding. What should I do or check when the IP and port are not responding, which is causing logs from all syslog senders not to be received? Kindly help me troubleshoot this. Thank you in advance.
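Two things worth checking alongside telnet: whether the input is actually listening on the collector (e.g. `netstat -an | grep <port>` and the [tcp://...] or [udp://...] stanza in inputs.conf), and whether the input is UDP, since telnet only tests TCP and a UDP syslog input will never answer a telnet connection. The TCP reachability check telnet performs can also be scripted; a small, hedged Python sketch:

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Rough equivalent of a quick telnet check: True if a TCP
    connection to host:port succeeds within the timeout. Note that
    UDP inputs cannot be probed this way."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a hypothetical collector address:
# print(tcp_port_open("x.x.x.x", 514))
```

If the port is not listening at all, the problem is on the Splunk side (input disabled, wrong port, or splunkd not running); if it listens locally but not remotely, look at firewalls between the senders and the collector.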
The company I work for has Splunk Enterprise; is the use of MINT included? I would like to try the tool in our applications.
Hi Team, we are currently using Splunk Enterprise 8.0.5. Only in plain text emails, we see some junk in the Subject, like: ?utf-8?q?SUBJECT?= We didn't face this issue in the older version (6.4.3). Could you please help?
Hi Team, I know the day can be pulled with date_wday, but I have tried a few ways and am unable to display the day along with the date. Can you please help with this?
index=XXX source=*abc.log | rex field=_raw "- (?<uc>U(\d{8})) " | rex "[^\w](?<JOB>(?<env>[A-Z0-9@_#]+)\.[A-Z0-9@_#]+\.[A-Z0-9@_#]+\.(?<app>[A-Z0-9@_#]+\.[A-Z0-9@_#]+)\.[A-Z0-9@_#]+)" | search env=* app=* JOB=*** uc=*U00000001* | eval date=strftime(_time,"%d-%m-%Y") | stats count by date,JOB | xyseries JOB,date,count | addtotals row=true | sort - "Total"
Current output:
JOB  | 14-12-2020 | 15-12-2020 | 16-12-2020 | 17-12-2020 | 18-12-2020 | 19-12-2020 | 20-12-2020 | 21-12-2020 | Total
JOB1 | 1 | 1 | 2 | 1 | 2 |   |   |   | 7
JOB2 |   | 2 | 2 | 1 | 1 |   |   |   | 6
JOB3 | 1 | 1 | 1 | 1 | 1 |   |   | 1 | 6
I am looking for output where I can display the day along with the date:
JOB  | 11-12-2020(Friday) | 12-12-2020(Saturday) | 13-12-2020(Sunday) | 14-12-2020(Monday) | 15-12-2020(Tuesday) | 16-12-2020(Wednesday) | 17-12-2020(Thursday) | 18-12-2020(Friday) | Total
Job1 |   | 8 | 10 |   |   |   |   |   | 18
Job2 | 1 | 1
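Splunk's strftime uses the standard C-style conversion specifiers, so adding %A (full weekday name) to the format string should give exactly this, i.e. | eval date=strftime(_time,"%d-%m-%Y(%A)"). Python's strftime shares the same specifiers, which makes the behaviour easy to verify:

```python
from datetime import datetime

# %A expands to the full weekday name, so date and day render together.
stamp = datetime(2020, 12, 11)
label = stamp.strftime("%d-%m-%Y(%A)")
print(label)  # 11-12-2020(Friday)
```

One caveat: xyseries sorts its columns lexicographically on the label, so the column order still follows the dd-mm-yyyy text, exactly as it does today.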
Hey all, when I'm creating a new scheduled search for a customer, is there any option to save the triggered alert to a specific index or another place? I don't want the customer to see the rules as fired or the triggered notable alerts. I just want to test the rule in staging and send the triggered alert somewhere that is not the notable index and that the customer cannot check. Is there any solution that does not involve sending mail?
Hi, I have configured the REST API Modular Input to receive CSV data using the default handler, with "response_type = text" in inputs.conf. Now I am trying to make Splunk identify the header fields. Event sample:
Endpoint Name,Site,Last Logged In User,Group,Domain,Account,Console Visible IP,Agent Version,Last Active,Subscribed On,Health Status,Device Type,OS,OS Version,Architecture,Memory,CPU Count,Core Count,MAC Address,Management Connectivity,Network Status,Update Status,Scan Status,IP Addresses,Pending Uninstall,Disk Encryption,Vulnerability Status,Agent UUID,Agent ID,Customer Identifier,Console Migration Status,Locations,Agent Operational State
123,Servers,N/A,AWS - Citrix XenApp,CHN,123,54.211.215.107,4.3.2.86,2020-12-21T09:28:41.047625Z,2020-06-19T13:08:24.023922Z,Healthy,server,Windows,"Windows Server 2016 Datacenter,14393",64 bit,32 GB,8,8,"['01:61:81:ed:11:aa', '02:67:80:ed:11:aa', '02:67:80:ed:11:aa', '02:67:80:ed:11:aa']",Online,Connected,Up to date,Completed (2020-06-19T16:16:38.500116Z),"['10.11.118.141', 'fe80::d861:311:4109:ec4e', 'fe80::d81c:371:4109:ec4e', '10.222.122.116']",No,Off,Requires patching,83b3c93437b349a3b5c378ecadd11,917238114889702111,N/A,N/A,"['tt', 'ec']",Not disabled by the user
1223,Servers,N/A,AWS - Citrix XenApp,CHN,121,54.211.215.107,4.3.2.86,2020-12-21T09:28:41.047625Z,2020-06-19T13:08:24.023922Z,Healthy,server,Windows,"Windows Server 2016 Datacenter,14393",64 bit,32 GB,8,8,"['01:61:81:ed:11:aa', '02:67:80:ed:11:aa', '02:67:80:ed:11:aa', '02:67:80:ed:11:aa']",Online,Connected,Up to date,Completed (2020-06-19T16:16:38.500116Z),"['10.11.118.141', 'fe80::d861:311:4109:ec4e', 'fe80::d81c:371:4109:ec4e', '10.222.122.116']",No,Off,Requires patching,83b3c93437b349a3b5c378ecadd11,917238114889702111,N/A,N/A,"['tt', 'ec']",Not disabled by the user
The REST API gets the CSV file, but it seems Splunk cannot handle it as CSV. According to https://docs.splunk.com/Documentation/Splunk/8.1.1/Data/Extractfieldsfromfileswithstructureddata, structured data header extraction does not work with modular inputs, network inputs, or any other type of input. Is this correct? If so, how can I get this CSV file indexed as CSV with the header fields identified correctly?
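Since indexed extractions apply to monitored files rather than modular inputs, one common workaround is to split the CSV into per-row events before it reaches the indexer, e.g. in a custom response handler for the modular input, using the header line for field names. A hedged Python sketch (the function name is hypothetical):

```python
import csv
import io

def csv_to_events(text):
    """Turn raw CSV text into one dict per row, keyed by the header
    fields; the csv module handles quoted fields with embedded commas,
    like the OS Version and IP Addresses columns in the sample."""
    return list(csv.DictReader(io.StringIO(text)))

sample = (
    "Endpoint Name,Site,OS,IP Addresses\n"
    '123,Servers,Windows,"[\'10.11.118.141\', \'10.222.122.116\']"\n'
)
events = csv_to_events(sample)
print(events[0]["Endpoint Name"], events[0]["OS"])  # 123 Windows
```

Each dict can then be emitted as a structured (e.g. key=value or JSON) event, so field extraction happens at ingest instead of relying on header detection that the input type does not support.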
Hello Splunkers, I have 6 Splunk applications with approximately 800 REST endpoint URLs in total. During application maintenance, how can I disable the inputs for these configured REST APIs?
Hi, I have a query which gives GroupName and its members in the format below:
GroupName                member
Domain Admins            CN=firstname1\, lastname1 P0,OU=P0-Accounts,OU=test OU
                         CN=firstname2\, lastname2 P1,OU=P1-Accounts,OU=test OU
                         CN=firstname3\, lastname3 P3,OU=P3-Accounts,OU=test OU
And I'm trying to split it into multiple events like below, separately for each member:
GroupName                member
Domain Admins            CN=firstname1\, lastname1 P0,OU=P0-Accounts,OU=test OU
Domain Admins            CN=firstname2\, lastname2 P1,OU=P1-Accounts,OU=test OU
Domain Admins            CN=firstname3\, lastname3 P3,OU=P3-Accounts,OU=test OU
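In SPL this is usually done by capturing each member into a multivalue field with rex max_match=0 and fanning it out with mvexpand, e.g. | rex field=member max_match=0 "(?<m>CN=.+?(?=\s+CN=|$))" | mvexpand m (the field names here are assumptions about your data). The regex itself, which relies on each member running from one "CN=" up to the next, can be prototyped in Python:

```python
import re

# Each member spans from one "CN=" up to (but not including) the next,
# or to the end of the string for the last member.
MEMBER = re.compile(r"CN=.+?(?=\s+CN=|$)")

member_field = (
    r"CN=firstname1\, lastname1 P0,OU=P0-Accounts,OU=test OU "
    r"CN=firstname2\, lastname2 P1,OU=P1-Accounts,OU=test OU "
    r"CN=firstname3\, lastname3 P3,OU=P3-Accounts,OU=test OU"
)

members = MEMBER.findall(member_field)
for m in members:
    print(m)  # each CN=... entry on its own line
```

The escaped commas (\,) inside the CN are what make splitting on a plain comma unreliable, which is why the pattern anchors on "CN=" instead.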