All Topics

The problem is that I have duplicate hosts under the Data Summary. I can see that some of them were last seen May 13; I know that because at one point 4 hosts were sending their data to index=main. When I corrected the index they send to, it created duplicate hosts that stopped sending to main. How do I remove the stale hosts, and will this affect the data?
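A hedged sketch of how the stale hosts could be confirmed first, assuming search access to the indexer; the `metadata` command reports each host's last event time and event count per index:

```spl
| metadata type=hosts index=main
| eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| sort + lastTime
| table host lastSeen totalCount
```

If the stale hosts only ever wrote to index=main, one common approach (not a recommendation without a backup) is to let those events age out with the index's retention policy, or remove them with the `delete` command under the can_delete role; either way, the data the hosts are now sending to the corrected index is unaffected.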
I have a Splunk Enterprise instance (v7.3.4) and I am wondering if there is a way to completely disable ProxyConfig in server.conf. Every time the software restarts there are 4 informational logs in splunkd.log related to the 4 proxy settings (http_proxy, https_proxy, proxy_rules, and no_proxy), but I don't really care about them since I won't be enabling any outside communication. Is this required behavior, or did I do something to trigger these startup messages?
Is there a way to display events in a table when the same value appears multiple times with other values? I am looking for user accounts that appear on two or more systems. The following is a list of records:

field1 | field2 | field3
sys1 | user1 | somevalue1 <<-- Want to grab this row
sys2 | user2 | somevalue2
sys2 | user2 | somevalue3
sys2 | user2 | somevalue4
sys2 | user1 | somevalue2 <<-- Want to grab this row

I have been trying different queries based on the following, but I can't seem to get the correct syntax. I can get a count on field1 and/or field2, but I am not able to pull just the events listed above along with their accompanying attributes (field3, field4, etc.):

index="myindex"
| stats count by field1 field2
| where count > 1
| table count field1 field2 field3
| sort - count
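A hedged sketch of one way to keep the raw rows instead of collapsing them, assuming the field names above (field1 = system, field2 = user); `eventstats` attaches the distinct-system count to every event without aggregating the events away, so the other attributes survive:

```spl
index="myindex"
| eventstats dc(field1) AS systemCount by field2
| where systemCount > 1
| table field1 field2 field3 systemCount
```

In the sample data this would keep only the two user1 rows, since user1 is the only account seen on more than one distinct system.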
I've got the following search to identify when a user has more than 20 auth failures. I'm trying to find a way to remove the extra logs of users who have fewer than 20 auth failures from the Events tab. For example, I might see in the Statistics tab 1 result indicating that a single user failed 135 times, yet in the Events tab I see 145 logs, which include 10 additional auth failures from other users who each failed fewer than 20 times. I only want to see the 135 logs in the Events tab corresponding to the 135 failures that survive "| search TotalAuthFailures >= 20", so that when analysts drill down on the alert they're not confused by additional users in the raw logs. How can I do this?

index=main sourcetype="wineventlog" EventCode=4625 (Sub_Status=0xC000006A OR Sub_Status=0xC0000064)
| eval match=if(match(Account_Name,".*\$"),1,0)
| eval Description=if(Sub_Status=="0xC0000064","User name does not exist.","User name is correct but the password is wrong.")
| where match=0
| stats count by user,src_ip,src_nt_host,Description
| rename count AS "TotalAuthFailures" user AS "User (Origin)" src_ip AS "Source IP Address" src_nt_host AS "Host (Origin)" EventCode AS "Event ID"
| dedup "User (Origin)"
| search TotalAuthFailures >= 20
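A hedged sketch of the usual workaround, assuming the same base search: replace the transforming `stats` with `eventstats`, which annotates each raw event with its per-user total instead of collapsing the events, so the threshold filter removes the unwanted users' events before anything reaches the Events tab:

```spl
index=main sourcetype="wineventlog" EventCode=4625 (Sub_Status=0xC000006A OR Sub_Status=0xC0000064)
| where NOT match(Account_Name, ".*\$")
| eventstats count AS TotalAuthFailures by user
| where TotalAuthFailures >= 20
```

The Statistics-style summary can still be produced by appending the original `stats`/`rename` pipeline after the second `where`; note that `eventstats` is heavier than `stats` because it keeps every event in memory while counting.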
Hi, I configured a Splunk Enterprise indexer to monitor Active Directory. That worked without issues; it found my domain controllers right away. I also configured the forwarder's conf file properly, but I'm not seeing any data in Splunk. Netstat shows that the indexer is listening on 9997. Netstat also shows that the domain controller running the forwarder is connected to the indexer on 9997. But still no data. Can someone please help?
My events look like this; REQUEST_NAME is the common field that ties requests and services together:

LogType=REQUEST status=200 REQUEST_NAME=XXXX URI=REQ1
LogType=SERVICE status=200 REQUEST_NAME=XXXX URI=SER1
LogType=SERVICE status=200 REQUEST_NAME=XXXX URI=SER2
LogType=SERVICE status=200 REQUEST_NAME=XXXX URI=SER3
LogType=REQUEST status=200 REQUEST_NAME=YYYY URI=REQ2
LogType=SERVICE status=200 REQUEST_NAME=YYYY URI=SER1

I want to see if this display is possible in a table format. Thanks for the help.

XXXX 200 SER1 200 SER2 200 SER2 200
YYYY 300 SER1 200
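A hedged sketch of one way to get close to that layout, assuming the fields extract as shown (`myindex` is a placeholder): group by REQUEST_NAME, pull the request's status into its own column, and list the service URIs alongside their statuses as multivalue columns.

```spl
index=myindex LogType=REQUEST OR LogType=SERVICE
| stats latest(eval(if(LogType=="REQUEST", status, null()))) AS RequestStatus
        list(eval(if(LogType=="SERVICE", URI, null()))) AS ServiceURI
        list(eval(if(LogType=="SERVICE", status, null()))) AS ServiceStatus
        by REQUEST_NAME
```

The `list()` calls preserve order and pairing between ServiceURI and ServiceStatus, so each row reads as one request followed by its services.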
Hi, How do I monitor Avg. Disk sec/Read and Avg. Disk sec/Write on a database server? They are not listed in the metric browser. These counters measure the average amount of time in seconds required to service a read or write request. The only metrics available are Disk Reads/sec and Disk Writes/sec, which measure the rate of operations, a different measure.

^ Edited by @Ryan.Paredez for an improved title
I have a huge XML file with many tiers. I use this command to limit the number of events to the XML data that I want to extract, then I use xmlkv to extract the XML fields. The fields that I need are extracted, but not all of the data is pulled. This is the command:

index=83261 source="service.log" sourcetype="dispatchapp" "RULE" "createMessage MsgSource"
| xmlkv

This is a partial example of the XML file:

PURCHASEDLINEHAUL DISPATCH 2020-05-21T17:22:55.000Z
  <ns2:numberCode>923</ns2:numberCode>
  <ns2:numberType>2</ns2:numberType>
</origin>
<destination>
  <ns2:numberCode>72</ns2:numberCode>
  <ns2:numberType>2</ns2:numberType>
</destination>
<purchasedCost>
  <purchasedCostTripSegment>
    <purchCostReference>1587040</purchCostReference>
    <carrier>FXTR</carrier>
    <vendorType>DRAY</vendorType>
    <carrierTrailerType>PZ1</carrierTrailerType>
    <origin>
      <ns2:numberCode>923</ns2:numberCode>
    </origin>
    <destination>
      <ns2:numberCode>4022</ns2:numberCode>
    </destination>
  </purchasedCostTripSegment>
  <purchasedCostTripSegment>
    <purchCostReference>1587040</purchCostReference>
    <carrier>BNSF</carrier>
    <vendorType>RAIL</vendorType>
    <carrierTrailerType>PZ1</carrierTrailerType>
    <origin>
      <ns2:numberCode>4022</ns2:numberCode>
    </origin>
    <destination>
      <ns2:numberCode>4040</ns2:numberCode>
    </destination>
  </purchasedCostTripSegment>
  <purchasedCostTripSegment>
    <purchCostReference>1587040</purchCostReference>
    <carrier>NS</carrier>
    <vendorType>RAIL</vendorType>
    <carrierTrailerType>PZ1</carrierTrailerType>
    <origin>
      <ns2:numberCode>4061</ns2:numberCode>
    </origin>
    <destination>
      <ns2:numberCode>4040</ns2:numberCode>
    </destination>
  </purchasedCostTripSegment>
</purchasedCost>

This image shows that ns2:numberCode is extracted, but only 3 values, while there are 5 instances in the view above, and in the entire XML there are many more. How can I get the rest of the instances in the field extraction, and how can I identify the path in the XML that these values come from?
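A hedged sketch of an alternative to `xmlkv`, which keeps only one value per tag name; `spath` preserves repeated elements as multivalue fields, and since its extractions are path-addressed, the path itself tells you where each value lives. The paths below are assumptions based on the snippet and would need adjusting to the real document root:

```spl
index=83261 source="service.log" sourcetype="dispatchapp" "RULE" "createMessage MsgSource"
| spath path=purchasedCost.purchasedCostTripSegment{} output=segment
| mvexpand segment
| spath input=segment
| table purchCostReference carrier origin.ns2:numberCode destination.ns2:numberCode
```

The `mvexpand` turns each `purchasedCostTripSegment` into its own row, which avoids losing instances when several segments share a tag name.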
I'm getting this error when testing out importing a metric from CSV data:

The metric value= is not valid for source=pdi_kinesis.csv, sourcetype=kinesis_metrics_csv, index=dev_metrics. Metric event data with an invalid metric value cannot be indexed. Ensure the input metric data is not malformed.

The only thing I can think of that might cause this is nulls in the data, i.e. ",,". Do I need to have a zero in there?
This is my search query for my alert:

index=test EventCode=4625
| eval Account_Name=mvindex(Account_Name, -1)
| search NOT Account_Name="BENQ$" NOT Account_Name="-"
| stats count by Account_Name
| where count >= 2

So the alert will trigger if a person fails to log in 2 times or more. The PDF shows the username as "johnsmithnull", but when opening it in the table it shows "johnsmith" and the count of how many times. Is "johnsmithnull" a title that gets appended by Splunk?
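A hedged guess at the cause: EventCode 4625 usually carries a multivalue Account_Name (machine account plus user account), and if one of the values is null or empty, flattening the multivalue field for PDF rendering could concatenate it into "johnsmithnull". A sketch that strips null/empty values before taking the last one:

```spl
index=test EventCode=4625
| eval Account_Name=mvfilter(NOT isnull(Account_Name) AND Account_Name!="")
| eval Account_Name=mvindex(Account_Name, -1)
| search NOT Account_Name IN ("BENQ$", "-")
| stats count by Account_Name
| where count >= 2
```

If the "null" text persists after this, it would point at the PDF rendering rather than the field values themselves.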
In an attempt to speed up long-running searches, I created a data model (my first) from a single index where the sources are sales_item (invoice line-level detail), sales_hdr (summary detail, type of sale), and sales_tracking (carrier and tracking). Skipping a lot of detail and back story: I got this search working, the goal being to bring in POType from sales_hdr and tracking info from sales_tracking and plug them into each item line from sales_item, based on Order, which is common to all three nodes in the DM.

| tstats summariesonly=t dc(sales_item.Material) as MaterialCount dc(sales_item.OrderLine) as OrderLineCount latest(sales_item.OrderLineStatus) as OrderLineStatus latest(sales_item.DateCreated) as DateCreated latest(sales_item.OrderLine) as OrderLine from datamodel=SCM.sales_item where sales_item.DateCreated=* AND sales_item.Order=0137737819 by sales_item.Order sales_item.Material
| rename sales_item.Order AS Order
| eventstats sum(OrderLineCount) as OrderLineSum sum(MaterialCount) as MaterialSum by Order
| appendcols
    [| tstats summariesonly=t latest(sales_hdr.POType) AS POType from datamodel=SCM.sales_hdr where sales_hdr.CreationDateHdr=* AND sales_hdr.Order=0137737819 by sales_hdr.Order
    | rename sales_hdr.Order AS Order]
| eventstats last(POType) as POType by Order
| appendcols
    [| tstats summariesonly=t latest(sales_tracking.CarrierName) AS CarrierName latest(sales_tracking.TrackingNumber) AS TrackingNumber from datamodel=SCM.sales_tracking where sales_tracking.Order=0137737819 by sales_tracking.Order sales_tracking.OrderLine
    | rename sales_tracking.Order AS Order
    | eventstats latest(CarrierName) as CarrierName latest(TrackingNumber) as TrackingNumber by sales_tracking.OrderLine]
| streamstats count as Row by Order

This works great for a single order. But as soon as I change it to nodename.Order=0137737* in all three tstats lines, the search returns empty columns; see the output in the picture below.
I know some of the "sum" columns in here are a little redundant, but I am using them to validate the results of the search. So my questions are: Do I need to use appendcols? If not, how would you bring these other fields into SCM.sales_item from SCM.sales_hdr and SCM.sales_tracking? If I do need appendcols, where am I getting this wrong? I suspect it has something to do with the "by" clauses being slightly different in each tstats; I have to split sales_item by Order and Material to get the individual rows of the invoice, and likewise sales_tracking to get each individual tracking number. In the output of a wildcard order-number search, I know that Order 0137737819's POType is Email, and if I search only that Order in all three tstats lines I get the expected result. One last note: I have tried using an | eval Order=coalesce to combine Order, but this fails worse than the rename I am doing here. Thanks for reading; I'll be here scratching my head.
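One likely explanation: appendcols pastes subsearch columns onto main-search rows purely by position, so it only lines up when every subsearch returns the same rows in the same order; with a wildcard Order the three result sets diverge and the columns land on the wrong rows or come back empty. A hedged sketch of the append-plus-stats pattern, which merges by Order instead of by position (aggregations trimmed for brevity; field names as in the data model above):

```spl
| tstats summariesonly=t dc(sales_item.Material) as MaterialCount from datamodel=SCM.sales_item where sales_item.Order=0137737* by sales_item.Order
| rename sales_item.Order AS Order
| append
    [| tstats summariesonly=t latest(sales_hdr.POType) AS POType from datamodel=SCM.sales_hdr where sales_hdr.Order=0137737* by sales_hdr.Order
    | rename sales_hdr.Order AS Order]
| append
    [| tstats summariesonly=t latest(sales_tracking.TrackingNumber) AS TrackingNumber from datamodel=SCM.sales_tracking where sales_tracking.Order=0137737* by sales_tracking.Order
    | rename sales_tracking.Order AS Order]
| stats values(MaterialCount) AS MaterialCount values(POType) AS POType values(TrackingNumber) AS TrackingNumber by Order
```

The final stats collapses the stacked rows into one row per Order, so mismatched row counts between the three nodes no longer matter.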
Hello All, We have a Splunk distributed environment with an intermediate heavy forwarder tier and an indexer tier. We need to implement HEC in our current environment, which will include writing to multiple indexes with a single token and ensuring some level of resiliency. Please let me know what the best approach for this would be.
I am trying to test a scripted input according to the steps mentioned here: docs.splunk.com/Documentation/SplunkCloud/8.0.2004/Data/Getdatafromscriptedinputs

The problem is that the trial instance only shows HTTP Event Collector. What am I doing wrong? Or am I expecting something that the trial instance doesn't provide? Thanks in advance!
We need to move our archives to different storage, and I'm looking for a way to blast this out to our 48 indexers all at once rather than walking through it indexer by indexer. I want to make sure we don't lose any in-flight writes while I swap the NFS mounts for the archive directory on each indexer, and I can't see a way to do that without stopping Splunk on the node. It doesn't look like maintenance mode will do this. The best I have is, for each node:

1. Drop the cluster into maintenance mode
2. splunk stop on the node
3. Mount the old archive to a temp location
4. Mount the new archive to the permanent location
5. splunk start

Then at the end I can kick off a script to move all the old data to the new location. Is there a magical way to tell Splunk to just stop archiving for a few minutes so I can make the mount swap?
Can anyone assist with resolving this error? We are using self-signed certs from the IdP and default certs in Splunk. The configuration on both ends looks fine, and there are no errors on the IdP end.

splunkd errors:

-0400 ERROR UiSAML - Verification of SAML assertion using the IDP's certificate provided failed. Error: Failed to verify signature with cert
-0400 ERROR Saml - Unable to verify Saml document
ERROR Saml - Error: Failed to verify signature with cert :/home/splunk/etc/auth/idpCerts/idpCert.pem;

Error on the UI, after entering credentials at the Single Sign On URL:

Verification of SAML assertion using the IDP's certificate provided failed. Error: Failed to verify signature with cert
Hi all,

Hoping someone can give some pointers on how to solve this problem. I run a transaction command on the last two weeks, which gives about 20,000 events, and for about 85 percent of the events the transaction command combines the events perfectly. However, for the remaining 13% there are still duplicates, meaning that the transaction command has not combined them properly. I think this is due to memory limits in limits.conf, and these could be increased, but it seems there should be smarter options, for example appending new events with a transaction command onto an existing lookup, if that is possible. Or perhaps there is a better way of combining the information without using transaction at all. The downside of the dataset is that transactions can occur over the entire two weeks, which means I cannot filter on maxspan; filters on maxevents don't improve performance either, since the transactions can vary a lot.

Cheers, Roelof

The minimal search (id field deleted, but it is just an integer on which the transaction is performed):

index= sourcetype= earliest=@d-14d
| fields ...
| transaction keeporphans=True keepevicted=True
| outputlookup .csv

This is the full minimal search. Two examples of snippets from the correct dataset would be: SYSMODTIME is a multivalue field, and there are a couple more mv fields in the complete dataset.
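A hedged sketch of the usual stats-based alternative, which avoids transaction's memory limits and event eviction entirely; all names here are placeholders for the redacted ones (`myindex`, `mysourcetype`, `id` for the integer transaction key, `combined.csv` for the lookup):

```spl
index=myindex sourcetype=mysourcetype earliest=@d-14d
| stats values(SYSMODTIME) AS SYSMODTIME earliest(_time) AS _time count AS eventCount by id
| outputlookup combined.csv
```

`stats ... by id` groups all events sharing the key regardless of how far apart they are in the two weeks, and unlike transaction it never evicts partially built groups, so no duplicates remain; the trade-off is losing transaction-specific outputs like duration, which can be rebuilt from `latest(_time)-earliest(_time)` if needed.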
I need to monitor all file reads, writes, deletes, etc. on an SMB share from a Windows server. In the past, I've just turned on full file auditing on the folder in question and used the Splunk Universal Forwarder to grab those events, and it worked great. However, I'm not sure how to accomplish that with an SMB share. I've looked at the forums and I see people referencing fschange, but that appears to have been deprecated, so I'd like to go the normal Windows logging route.

Questions:
1. If I turn on file auditing on \\smbshareserver\share1 and mount it to server1.company.com, would all of the file access attempts be logged to the local Event Log even if another server mounts that share? I would think not, which would defeat the purpose.
2. Does the Isilon storage have its own log file that would record this somewhere else where I could grab it?
3. Is there a better/easier solution?

The whole goal is that I need to fully monitor this SMB share from one location, even though lots of computers and users could access it.
We are trying to set up a sysevent filter for the name attribute. We have more than one name value and have set up the filter like this:

filter_data = name=login&name=logout&name=login.failed&name=impersonation.start&name=impersonation.end&name=security.elevated_role.enabled&name=security.elevated_role.disabled

But it doesn't work as expected. The issue appears to be that you can't specify multiple values for the same column name using &. So name=login&parm1=myid works, but name=login&name=logout (repeating the same column multiple times) does not. The URL would need to specify a list of values for the name column instead of repeating it. What is the proper syntax for adding the filters?
I have a certain host that sends several logs from multiple sources using the Linux Universal Forwarder. Most of these logs are written on the host, and then indexed in Splunk, in UTC, although the host is configured with the correct local time. How do I get Splunk to display the local time zone instead of UTC?
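A hedged sketch of the usual fix, assuming the raw timestamps really are UTC: tell Splunk at parse time that those sources are UTC by setting TZ in props.conf on the indexer (or the first heavy forwarder in the path), so _time is stored correctly; display then follows each user's time-zone preference in Splunk Web. The stanza names below are placeholders:

```
# props.conf on the indexer or heavy forwarder
[my_utc_sourcetype]
TZ = UTC

# or scope it to the host instead of the sourcetype
[host::myhost]
TZ = UTC
```

Note TZ only affects events indexed after the change; already-indexed events keep their original _time.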
Hello, We have some custom timers in the code, and I wanted to create a search based on the timer name. This is what I did:

SELECT timer FROM mobile_session_records WHERE timer.timername is not null AND timer.timername = "SaveItem"

However, the result is not filtering properly by timername; it shows me entries for other timer names as well. Then I tried this:

SELECT timer.timername FROM mobile_session_records WHERE timer.timername is not null AND timer.timername = "SaveItem"

And it looks like timername is an array with a bunch of timer events in it, which is strange. This is an example of the response I got:

["SaveItem"]
["LoadItem", "SaveItem"]
["Login", "SaveItem"]

Why does timername contain multiple timer events? Has anyone tried to create searches based on custom timers? How did you do it?