All Topics

36,03/26/20,13:12:04,Packet dropped because of Client ID hash mismatch or standby server.,IP,,B88584ADE973,,0,6,,,,,,,,,0
36,03/26/20,13:12:04,Packet dropped because of Client ID hash mismatch or standby server.,IP,,B88584ADE973,,0,6,,,,,,,,,0
11,03/26/20,13:12:04,Renew,IP,Oscarphone8,B841A4B2E9C8,,2541188417,0,,,,,,,,,0
11,03/26/20,13:12:04,Renew,IP,Oscarphone8,B841A4B2E9C8,,2541188417,0,,,,,,,,,0
31,03/26/20,13:12:04,DNS Update Failed,IP,xxx.jp,,,0,6,,,,,,,,,2
30,03/26/20,13:12:04,DNS Update Request,IP,xxx.jp,,,0,6,,,,,,,,,0
11,03/26/20,13:12:04,Renew,IP,xxx.jp,105BADA1EB91,,999576276,0,,,,0x4D53465420352E30,MSFT 5.0,,,,0
31,03/26/20,13:12:04,DNS Update Failed,IP,xxx.jp,,,0,6,,,,,,,,,2
30,03/26/20,13:12:04,DNS Update Request,IP,xxx.jp,,,0,6,,,,,,,,,0
When I go to the Performance Monitoring screen of the Splunk App for Windows Infrastructure, I only get data for the CPU and Network metrics. I am on Splunk 8.0.2.1, Splunk Add-on for Microsoft Windows 7.0.0, and Splunk App for Windows Infrastructure 2.0.1, and I am using the current forwarder on the Windows 10 machine sending data to the indexer. It looks like data is coming in, but in a different format than it used to be; it seems that the source no longer has the counter field. Am I missing something in the update?
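One thing worth checking (a sketch, not a confirmed fix): the perfmon stanzas in the Windows TA support a mode setting that controls whether each counter sample is emitted as its own single-line event or as a multi-line multikv table, and the app's dashboards are sensitive to which format arrives. The stanza name and counters below are illustrative; copy the real stanza from the TA's default/inputs.conf, and verify against the app's docs which mode version 2.0.1 expects.

```conf
# $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/inputs.conf on the forwarder
# (stanza name, counters, and interval are illustrative)
[perfmon://CPU]
counters = % Processor Time; % User Time
instances = *
interval = 10
mode = single
useEnglishOnly = true
disabled = 0
```

If the events regain the counter field with one mode and lose it with the other, that points at the format change introduced by the TA upgrade rather than a data collection problem.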
So I have to update my datetime.xml file in Splunk because of the timestamp extraction problem after 1 Jan 2020. According to Splunk, we have to overwrite the existing file with the new file they provide. Now my question: I have 10 indexers, 20 search heads, 2 heavy forwarders, and thousands of universal forwarders. Do I need to update datetime.xml on just my heavy forwarders? Do I need to update the new datetime.xml on all indexers as well? If yes, please help me with how to push the configuration from the master. Thanks
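Since datetime.xml lives outside any app ($SPLUNK_HOME/etc/datetime.xml), it is not pushed by the usual bundle mechanisms. One commonly suggested workaround (a sketch, not official guidance; the app name is made up) is to ship the patched file inside an app and point DATETIME_CONFIG at it:

```conf
# App layout, deployable via deployment server (forwarders) or master-apps (indexers):
#   fix_datetime/default/datetime.xml   <- the patched file from Splunk
#   fix_datetime/default/props.conf     <- the stanza below
[default]
DATETIME_CONFIG = /etc/apps/fix_datetime/default/datetime.xml
```

Timestamp extraction happens at the first parsing tier, so heavy forwarders and indexers are the usual targets; universal forwarders generally do not parse timestamps, with structured inputs (INDEXED_EXTRACTIONS) being the notable exception.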
As per the docs, issueReload determines whether the client's splunkd reloads after receiving an update, and restartSplunkd determines whether the client's splunkd restarts after receiving an update. Can somebody please explain what happens internally in Splunk when a client reloads vs. when it restarts?
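For reference, both settings are per-app flags in serverclass.conf on the deployment server. Broadly, a reload re-reads configuration that Splunk can refresh without dropping the process, while a restart stops and starts splunkd, briefly interrupting forwarding. A minimal sketch with made-up class and app names:

```conf
# serverclass.conf on the deployment server
[serverClass:linux_hosts:app:my_inputs_app]
issueReload = true      # attempt an in-place config reload after deploying the app
restartSplunkd = false  # set true instead when the app changes non-reloadable settings
```

Which settings are reloadable vs. restart-only varies by version, so the official serverclass.conf and "reload vs restart" documentation is the place to confirm for a given config.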
Hi guys, what does maxHotBuckets do? Let's say I don't set it and its value is 3: will my indexer have 3 hot buckets at all times, or does it depend on the amount of data arriving? For example, if an index is ingesting 500 MB per day (a summary index fed by a real-time index search), will it just use one hot bucket instead of 3? Any advice highly appreciated. Pramodh
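As I understand it, maxHotBuckets is an upper bound rather than a target: Splunk opens additional hot buckets only when it needs them, typically when incoming events span time ranges that do not fit cleanly into one bucket, so a low-volume index with well-ordered timestamps can sit at a single hot bucket. A sketch in indexes.conf with an illustrative index name:

```conf
# indexes.conf (illustrative index name and paths)
[my_summary]
homePath   = $SPLUNK_DB/my_summary/db
coldPath   = $SPLUNK_DB/my_summary/colddb
thawedPath = $SPLUNK_DB/my_summary/thaweddb
maxHotBuckets = 3   # at most 3 concurrent hot buckets; fewer are used if data arrives in time order
```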
Hello guys, I am new to Splunk, and in my first project I have to index logs from a remote server, which I am doing with DB Connect. My problem is that when I index the data from this server, the time logged into Splunk is the time when Splunk pulled the data, and I need these events logged with the time they were generated. I figured out that when you first pull data with DB Connect, you can take the timestamp from a column of the database. Luckily this DB has a time column called "clock", but unfortunately its values are in epoch format. So my question is: what do I have to write in the timestamp format field? I tried %s without any results. Thank you c:
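If the timestamp column selector will not accept a raw epoch value, a workaround people often use is to convert the column to a real datetime inside the DB Connect input's SQL and use the converted column as the timestamp (and rising) column. A sketch assuming MySQL; other databases spell the conversion differently (e.g. to_timestamp() in PostgreSQL), and the table and column names below are assumptions:

```sql
-- Illustrative query for the DB Connect input
SELECT clock,
       FROM_UNIXTIME(clock) AS clock_dt,   -- epoch seconds -> DATETIME
       other_column
FROM   my_log_table
WHERE  clock > ?                           -- rising-column checkpoint placeholder
ORDER BY clock ASC
```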
All, I’m not getting consistent data within the dashboards, and in some cases no data at all in multiple dashboards within the Palo Alto App for Splunk. I could really use a sanity check! To cover the basics:
• Followed the Debian Linux installation instructions step by step. All seemed to be good.
• Set up Splunk’s data input: UDP 514, sourcetype pan:log.
• Have my PAN firewalls forwarding syslog to my Splunk instance. Data seems to be coming in fine.
• To test, I search for eventtype=pan and I’m getting good results.
My /opt/splunk/etc/apps/Splunk_TA_paloalto/default/inputs.conf looks like:
[udp://514]
connection_host = ip
sourcetype = pan:log
no_appending_timestamp = true
disabled = 0
To make sure there weren’t conflicting inputs.conf files, I commented out /opt/splunk/etc/apps/SplunkforPaloAltoNetworks/local/inputs.conf. I checked the entire file structure, and user:group for all apps is splunk:splunk. Thoughts?
I am trying to add a contextual input field to my dashboard and I seem to be having a hard time translating it into something Splunk understands. The idea is to use a dropdown menu to select between 0, 30, 60, and 90 days, at which point all subsequent dashboard panels will exclude logs that have a VulnerabilityPublishedDate earlier than the selection. I originally thought I would give the dropdown a token, say $datemodifier$, and then add the logic below to each of my dashboard queries, but this does not seem to work.
In the dashboard I tried this:
| eval OffsetTime = strftime(relative_time(now(),"-$datemodifier$d@d"), "%Y-%m-%d")
This is my search:
index=stuff sourcetype="stuff"
| eval Epoch_Time=strptime(VulnerabilityPublishedDate, "%Y-%m-%d")
| eval stripTime=strftime(Epoch_Time, "%Y-%m-%d")
| eval OffsetTime = strftime(relative_time(now(),"-30d@d"), "%Y-%m-%d")
| where stripTime <= OffsetTime
| table Epoch_Time stripTime VulnerabilityPublishedDate OffsetTime
Sample output from this search (the same row repeated for each matching event):
1583798400.000000  2020-03-10  2020-03-10 00:00:00.0  2020-03-25
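A sketch of one way this is often wired up, doing the comparison in epoch seconds instead of date strings so the token appears only once per panel. The token and field names follow the question; the panel layout is illustrative, and note that the comparison operator must be XML-escaped inside SimpleXML:

```xml
<form>
  <fieldset>
    <input type="dropdown" token="datemodifier">
      <label>Published within (days)</label>
      <choice value="0">0</choice>
      <choice value="30">30</choice>
      <choice value="60">60</choice>
      <choice value="90">90</choice>
      <default>30</default>
    </input>
  </fieldset>
  <row><panel><table>
    <search>
      <query>
        index=stuff sourcetype="stuff"
        | eval Epoch_Time=strptime(VulnerabilityPublishedDate, "%Y-%m-%d")
        | where Epoch_Time &gt;= relative_time(now(), "-$datemodifier$d@d")
        | table VulnerabilityPublishedDate
      </query>
    </search>
  </table></panel></row>
</form>
```

Comparing epochs also sidesteps any locale or formatting mismatch that a lexical string comparison can hide.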
Unable to start the Events Service; it fails with "java.lang.RuntimeException: java.net.BindException: Address already in use". The "events-service-api-store.id" and "elasticsearch.id" files are not present in the Events Service home folder.
Command used:
bin/events-service.sh start -p conf/events-service-api-store.properties
Error:
CompilerOracle: exclude org/apache/lucene/lucene54/Lucene54DocValuesConsumer.addSortedNumericField
CompilerOracle: exclude org/apache/lucene/lucene54/Lucene54DocValuesConsumer.addBinaryField
CompilerOracle: exclude org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/AbstractTDigestPercentilesAggregatorstart.collect
CompilerOracle: exclude org/apache/lucene/index/SortedNumericDocValuesWriter.flush
CompilerOracle: exclude org/apache/lucene/codecs/PushPostingsWriterBase.writeTerm
16:04:27.646 [main] INFO com.appdynamics.analytics.processor.AnalyticsService - Starting analytics processor with arguments [-p, /app/AppDynamics/platform/product/events-service/processor/conf/events-service-api-store.properties, -y, /app/AppDynamics/platform/product/events-service/processor/bin/../conf/events-service-api-store.yml]
java.lang.RuntimeException: java.net.BindException: Address already in use
    at org.eclipse.jetty.setuid.SetUIDListener.lifeCycleStarting(SetUIDListener.java:213)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarting(AbstractLifeCycle.java:188)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:67)
    at io.dropwizard.cli.ServerCommand.run(ServerCommand.java:53)
    at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:44)
    at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
    at io.dropwizard.cli.Cli.run(Cli.java:75)
    at io.dropwizard.Application.run(Application.java:93)
    at com.appdynamics.common.framework.AbstractApp.callRunServer(AbstractApp.java:270)
    at com.appdynamics.common.framework.AbstractApp.runUsingFile(AbstractApp.java:264)
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:251)
    at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:171)
    at com.appdynamics.analytics.processor.AnalyticsService.main(AnalyticsService.java:72)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:334)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:302)
    at org.eclipse.jetty.setuid.SetUIDListener.lifeCycleStarting(SetUIDListener.java:200)
    ... 12 more
Hi all, I am trying to get a count of all users signed into our VPN. While this is easy, I need it broken out by the user's role into Sysadmin, Students, and Employees. The catch is management wants the Employees number to be a sum total of Employees, Research, and Administrators, but NOT to include Sysadmins and Students. All of these roles come from the (you guessed it) "roles" field extraction.
index=pulsesecure vendor_action=Closed OR vendor_action=ended OR vendor_action=succeeded OR "Logout" OR "Max session timeout" OR vendor_action=started
| eval user = user . " " . src_ip
| sort -_time
| table user, roles, vendor_action, action, _time, src_ip
| dedup user
| search vendor_action=succeeded OR vendor_action=started
| stats count(user) by roles
This query gives us all the information being asked for. I just need to get the 3 specific entries from roles added together.
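One way to fold the three employee-type roles into a single bucket is an eval case() before the stats. A sketch: the match() patterns assume the literal role names mentioned in the question, so adjust them to what the roles field actually contains (and note that if roles is multivalue, match() tests each value):

```spl
index=pulsesecure (vendor_action=succeeded OR vendor_action=started)
| dedup user src_ip
| eval role_group=case(
    match(roles, "(?i)sysadmin"),                        "Sysadmin",
    match(roles, "(?i)student"),                         "Students",
    match(roles, "(?i)employee|research|administrator"), "Employees",
    true(),                                              "Other")
| stats dc(user) AS users BY role_group
```

The Sysadmin and Student branches come first so those users are never counted into Employees even if they also carry an employee-like role.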
Is the dashboard chart UI element the right thing to use to create a query that will show 3 columns, configured as follows: column 1 shows values for result X, column 2 shows values for result Y, and column 3 shows the delta between columns 1 and 2? Any pointers to examples? Thanks!
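The chart element can render this if the search emits one row per category with three numeric columns; the delta is just an eval after the aggregation. A sketch with made-up index, field, and series names:

```spl
index=my_data scenario IN (X, Y)
| chart count OVER category BY scenario
| eval delta = X - Y
| table category X Y delta
```

Rendered as a column chart, each category then shows three bars: X, Y, and their delta.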
I am trying to compare 2 indexes (malicious domains against proxy logs) using an evaluated field. I have a subsearch which pulls from 2 fields (host and uri), and I want to match it against a combined field (host and uri) in the parent search.
index=proxy_logs method=GET [inputlookup malicious_urls.csv | eval full_url=host.uri | table full_url]
| eval full_url=host.uri
| table full_url
It is not returning any events, but it should, as I'm using test data. I tried putting the eval before the subsearch, which I assumed was the problem, like this:
index=proxy_logs method=GET
| eval full_url=host.uri
| search [inputlookup malicious_urls.csv | eval full_url=host.uri | table full_url]
| table full_url
This also doesn't return any results. Any recommendations? I will also take a solution that returns host and uri individually and compares them against host and uri in the proxy logs, but I couldn't find that solution either. I can successfully match on just one field, the host, but this is rather noisy as many of the domains are URL shorteners. Any help is appreciated. Thanks.
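Two things worth checking in the subsearch approach: a generating command like inputlookup needs a leading pipe inside the brackets, and the filter a subsearch produces is applied at the point where the search command sits, so the concatenated field must already exist there. A sketch combining both (lookup and field names as in the question):

```spl
index=proxy_logs method=GET
| eval full_url = host . uri
| search
    [| inputlookup malicious_urls.csv
     | eval full_url = host . uri
     | fields full_url ]
| table host uri full_url
```

Since matching on host alone worked, if this still returns nothing it is worth spot-checking whether uri in the lookup carries a leading "/" while the proxy field does not (or vice versa), which would make every concatenation miss.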
Hi everyone, I am trying to integrate ServiceNow with Splunk (8.0.1, distributed environment with a search head cluster), but no luck so far, despite many hours spent trying out different configurations. Does anyone know whether there are any documented issues with the add-on (v5.0.1) in combination with 8.0.1? Any other considerations worth a mention? Many thanks in advance, stay healthy, best regards
Not getting a successful connection to the API server; the documentation is not complete.
Hello, I have an RPi 4 at home running Raspbian, with the universal forwarder installed on it, logging data to be sent to the Splunk server on my VM. My question is: would it be possible to control said Raspberry Pi from the server itself? By control I mean send a command or a script to the RPi that would change the current directory or something similar. I have a few apps running on the RPi and I would like to shut them down, restart them, etc. from the Splunk server, without needing to manually log into the Pi itself. Thank you for taking the time to read this, and I apologise if I didn't include enough details; it is my first question.
Hi, I tried to upgrade the Splunk universal forwarder from 7.0.2 to 8.0.2 and everything looks good: no errors in the splunkd logs, data is ingesting normally, and all internal logs are also coming in fine. But when I look into migration.log I see these messages; could they be a problem?
[App Key Value Store migration] Binary for service(34) is missing.
As far as I know this is related to the KV store migration, and the Splunk forwarder won't use it. Can anyone help with this?
Hello all, I have data in JSON format which I am trying to import into Splunk. I want to extract the timestamp from the last value of a multivalue field. For instance, there is a field called appId which is a multivalue field with values 1573503539877, 1573503539875, 1573503539878, 1573503539873, and I want to make the last value the timestamp. The last value of the multivalue field has the following format, with a closing curly bracket and a square bracket, but the others have just a curly bracket:
apps: [
  {
    addedById: 5d013c468
    appId: 5d0d1fc13d418bdf5
    dateAdded: /Date(1573503009489)/
    ...
    (last value of the multivalue field, which needs to be extracted:)
    addedById: 398
    appId: ccaaadb
    dateAdded: /Date(1584128055615)/
  }
]
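At search time, one way to grab the last dateAdded and promote it to _time is spath plus mvindex plus rex. A sketch: the JSON path below assumes the apps[].dateAdded structure shown above, and the values are epoch milliseconds, hence the divide by 1000:

```spl
... your base search ...
| spath path=apps{}.dateAdded output=date_added
| eval last_added = mvindex(date_added, -1)
| rex field=last_added "Date\((?<epoch_ms>\d+)\)"
| eval _time = epoch_ms / 1000
```

Doing this at index time instead would require a props.conf TIME_PREFIX/TIME_FORMAT that anchors on the last occurrence in the event, which is much harder to express; the search-time route above is the simpler option.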
Greetings, I hope you can help with the following. This query works when in streaming mode, but fails when the job switches to map/reduce:
index=hdpidx event.original="*\"EventId\":\"4776\"*"
| rex field=event.original "TargetUserName\":\"(?<foo>.+?)\""
| ldapfilter search="(sAMAccountName=$foo$)" attrs="name"
| table foo, name
It will begin returning results, but once it switches to map/reduce we see the following error repeated for multiple Hadoop worker nodes:
[hdpprov] [w001.cluster] External search command 'ldapfilter' returned error code 1. Script output = "error_message=TypeError at "/tmp/splunk/splunk.hdpidx/splunk/var/run/searchpeers/splunk.hdpidx-1585082701/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 478 : int() argument must be a string or a number, not 'NoneType' ".
[hdpprov] [w003.cluster] External search command 'ldapfilter' returned error code 1. Script output = "error_message=TypeError at "/tmp/splunk/splunk.hdpidx/splunk/var/run/searchpeers/splunk.hdpidx-1585082701/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 478 : int() argument must be a string or a number, not 'NoneType' ".
... (the error is repeated for each of the worker nodes participating in the job) ...
I do not believe this is an issue with ldapfilter itself, but with an interaction between $var$ expansion and Hunk. If I add | head 10 directly after the initial search (before the rex), the search runs successfully, which IMHO points to the issue only occurring during map/reduce. I have verified that the worker nodes have network access to the AD servers. Could this be an escaping issue where $foo$ is being replaced with a (non-existent) environment variable, hence the NoneType? Any thoughts would be much appreciated!
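Not a root-cause fix, but one way to test the map/reduce theory: the localop command forces everything after it to run on the search head rather than on the remote peers, so ldapfilter never executes on the Hadoop workers. A sketch (note it funnels all of the rex output through the search head, so watch the result volume):

```spl
index=hdpidx event.original="*\"EventId\":\"4776\"*"
| rex field=event.original "TargetUserName\":\"(?<foo>.+?)\""
| localop
| ldapfilter search="(sAMAccountName=$foo$)" attrs="name"
| table foo, name
```

If this runs cleanly over the same time range, that supports the theory that the failure is specific to ldapfilter executing on the worker nodes.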
I tried the query below, and I do get the action results (e.g. edit), but I am not able to see what was edited, like a deep-dive result. Basically I need to see if anyone with those roles edited, added, or deleted something in Splunk.
index=_audit user!=splunk-system-user user!="n/a" (action=edit OR action=create OR action=delete)
| table _time user, action info host
Result table:
Date & time: aaaaaaaaa
user: AAAAAA
action: edit_deployment_client, edit_user (for this result I need to see what was edited by the user, as a deep-dive result)
host: BBBBBBBB
Thanks in advance
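In my experience the _audit index records that an object was touched but not the before/after values. For changes made through Splunk Web or the REST API, the splunkd access log at least shows which endpoint, and therefore which object, was modified. A sketch, assuming the field names as they appear in sourcetype=splunkd_access:

```spl
index=_internal sourcetype=splunkd_access user!="-" method IN (POST, DELETE)
| table _time user method uri_path status
```

For actual value-level diffs you would still need to compare configurations yourself (e.g. periodic btool output or config backups); neither _audit nor splunkd_access stores the changed settings.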