I have a table formatted something like this:

1 John, Smith, a123, superuser, blah
2 John, Smith, a123, audit user, blah
3 Sally, Smith, a234, regular user, blah
4 Andy, Smith, a345, audit user, blah
5 Andy, Smith, a345, log user, blah
6 Andy, Smith, a345, super user, blah

When you run the lookup for a user id (such as a123), you get both results on two lines within the same box in the table. I want one single line with the user types concatenated. So instead of:

a123, super user
a123, audit user

I want:

a123, "super user, audit user"

Is that possible?
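One common approach (a sketch, assuming the fields are named user_id and user_type; adjust to the real field names) is to group the rows with stats values and then flatten the multivalue result with mvjoin:

```
... | stats values(user_type) as user_type by user_id
    | eval user_type=mvjoin(user_type, ", ")
```

stats values collects the distinct user types per user id into a multivalue field, and mvjoin turns that into a single comma-separated string on one line.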
I am new to Splunk and I need to display my data in a typical TV-guide format: the X axis is the list of channels, the Y axis is a timeline with a scroll bar to go left and right of the current time, and each row contains variable-length rectangles that show the programs. What is the best way to achieve this in a Splunk dashboard?
It has been a while since I have worked with Linux, but I am doing my best to refresh my knowledge. I successfully installed the latest forwarder on Ubuntu; it phoned home and the deployment server pushed config to it. But now it has stopped working. I have configured it to run as user 'splunk', not as root. This has caused some issues, for instance when I just ran:

splunk@myserver:~$ ./bin/splunk display deploy-client
Pid file "/opt/splunkforwarder/var/run/splunk/splunkd.pid" unreadable.: Permission denied
Pid file "/opt/splunkforwarder/var/run/splunk/splunkd.pid" unreadable.: Permission denied
Operation "ospath_fopen" failed in /opt/splunk/src/libzero/conf-mutator-locking.c:337, conf_mutator_lock(); Permission denied

I sudo'd to root and ran:

/opt/splunkforwarder# chown -R splunk:splunk *

The error went away, but now when running the same command (as splunk) nothing happens. I must press CTRL+C to get out of it:

splunk@myserver:~$ ./bin/splunk display deploy-client
^C
splunk@myserver:~$

Most likely this is more of a basic Linux question, but still, does anyone have an idea of what could be wrong?
Update: now I tried

splunk@myserver:~$ ./bin/splunk list forward-server
Cannot initialize: /opt/splunkforwarder/etc/apps/learned/metadata/local.meta: Permission denied
Cannot initialize: /opt/splunkforwarder/etc/apps/learned/metadata/local.meta: Permission denied
Cannot initialize: /opt/splunkforwarder/etc/apps/learned/metadata/local.meta: Permission denied

Since I did run the chown, this should not happen, so I am quite sure I did something not totally correct when installing as root and then switching to splunk as described here: https://docs.splunk.com/Documentation/Splunk/8.0.2/Admin/ConfigureSplunktostartatboottime#Enable_boot-start_as_a_non-root_user (well, it is simply just the chown command). But given

splunk@myserver:~$ ls -la /opt/splunkforwarder/etc/apps/learned/metadata/local.meta
-rw------- 1 root root 531 Mar 8 19:15 /opt/splunkforwarder/etc/apps/learned/metadata/local.meta

something is not correct on the server.
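A file owned by root after the chown usually means splunkd was started as root again afterwards and recreated files, or the chown glob missed part of the tree. A minimal reset sequence (a sketch, assuming the default /opt/splunkforwarder install path):

```shell
# Stop any running (possibly root-owned) splunkd first
sudo /opt/splunkforwarder/bin/splunk stop

# chown the whole tree by path, not with a glob, so nothing is missed
sudo chown -R splunk:splunk /opt/splunkforwarder

# Start again strictly as the splunk user
sudo -u splunk /opt/splunkforwarder/bin/splunk start
```

The key point is that every subsequent start must happen as the splunk user (e.g. via boot-start configured with -user splunk), or root-owned files will reappear.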
I have installed a universal forwarder on an Ubuntu Linux box without error; here is some validation:

splunk list forward-server
Active forwards:
    input-prd-p-xxxxxxxxxx.cloud.splunk.com:9997 (ssl)

The forwarder does show up in monitoring, but when I go to add the forwarder under Settings -> Data, it doesn't show any forwarders available, only the refresh button. I also downloaded and copied the Splunk add-on for Linux under /opt/splunkforwarder/etc/apps/Splunk_TA_linux, as the first goal is to get performance data into the cloud. Thank you!
Based on the time picker and time modifier token, I am displaying the time values in a human-readable format in a label. With this command I get the proper results:

| makeresults
| eval latest1="1583038799.000", earliest1="1567310400.000"
| eval latest2=strftime(latest1,"%Y-%m-%d %H:%M:%S"), earliest2=strftime(earliest1,"%Y-%m-%d %H:%M:%S")

But if I try it with the time modifier I do not get the same result. I am not sure if it is because of the time zone?

<search>
  <query>| makeresults | addinfo</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <done>
    <eval token="Tearliest">strftime($result.info_min_time$,"%Y-%m-%d %H:%M:%S")</eval>
    <eval token="Tlatest">strftime($result.info_max_time$,"%Y-%m-%d %H:%M:%S")</eval>
  </done>
</search>

In both places I am using the same code but getting different results. Any thoughts? Thanks in advance.
I have a deployment server application that I use to control my "SYSLOG" server, which receives logs from various other sources. The SYSLOG server has a Splunk UF installed on it, and it sends the data to configured indexes with the relevant sourcetypes. I have a range of data sources in this app to direct where my data goes. The UF is effectively monitoring for files in a directory structure. For example:

[monitor:///data/splunkforwarder/myfiles/app1/*/messages*]
host_segment = 5
sourcetype = app1_sourcetype
index = app1

[monitor:///data/splunkforwarder/myfiles/app2/*/messages*]
host_segment = 5
sourcetype = app2_sourcetype
index = app2

I have a monitor input that is using the standard JSON sourcetype provided by Splunk for another directory:

[monitor:///data/splunkforwarder/myfiles/BIGAPP/*/messages*]
host_segment = 5
sourcetype = json
index = bigapp

BIGAPP sends its logs via SYSLOG and this works as expected, however the time that is indexed in Splunk is out by 8 hours. The event arrives at, say, 8:45pm but Splunk indexes it at 12:45 (a difference of 8 hours). I attempted the following:

[monitor:///data/splunkforwarder/myfiles/BIGAPP/*/messages*]
host_segment = 5
sourcetype = json
index = bigapp
TZ = Australia/Perth

I reloaded my DS and resent a log, but this made no difference. From reading the articles, it would seem that this must be done in props.conf only? Do I have to create a new sourcetype (effectively duplicating the JSON sourcetype) and then apply this props to my SYSLOG application? I don't want to impact my app, as all of the other monitored files are accurate from a timestamp perspective, so I only need to change this one. The BIGAPP vendor does not support changing the time zone on the syslog source, so I have to resort to having Splunk fix this. Thanks for any assistance.
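One approach worth trying (a sketch; the paths are taken from the stanzas above): TZ is a props.conf setting, so it has no effect in inputs.conf. A props.conf stanza can be scoped to the source path rather than the sourcetype, so the other json inputs stay untouched:

```ini
# props.conf -- scoped to the BIGAPP files only, not to the json sourcetype
[source::/data/splunkforwarder/myfiles/BIGAPP/*/messages*]
TZ = Australia/Perth
```

Where to deploy it depends on where timestamping happens: for sourcetypes with INDEXED_EXTRACTIONS (the built-in json sourcetype is one), parsing happens on the UF itself, so the props.conf must go to the forwarder via the deployment server; otherwise it belongs on the indexer or heavy forwarder.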
Hello Community, I evaluate the values of a single field which comes with values such as OUT, IN, and DENIED, and I can get counters for each of those values. Now I want to subtract "OUT" minus "IN" (or maybe even minus "DENIED"):

index="application-license" sourcetype=application License_User_device=* License_feature_status=* License_user=*
| fields _time, License_user, License_User_device, License_feature_status, License_feature, tag, eventtype
| eval User=(License_user)
| eval LicenseTaken-OUT=mvfilter(match(License_feature_status, "OUT$"))
| eval LicenseTaken-IN=mvfilter(match(License_feature_status, "IN$"))
| eval LicenseTaken-DENIED=mvfilter(match(License_feature_status, "DENIED$"))
| eval LicenseTaken=(License_feature_status)
| eval LicenseTaken-AVG=mvfilter(match(License_feature_status, "OUT$") OR match(License_feature_status, "IN$"))
| eval License_feature=(License_feature)
| eval Time=strftime(_time, "%d-%m-%Y %H:%M:%S")
| bucket Time span=100d
| timechart count(LicenseTaken-OUT) as "Application-LicenseTaken(OUT)" count(LicenseTaken-IN) as "Application-LicenseTaken(IN)" count(LicenseTaken-DENIED) as "Application-License-DENIED" count(LicenseTaken) as "Application-License Taken(sum)" count(LicenseTaken-AVG) as "License_avg"
| predict License_avg algorithm=LLT upper40=high lower40=low future_timespan=45 holdback=3

In the sample above I would like to implement something like:

| eval LicenseTaken=(License_feature_status) - | eval field1=mvfilter(match(field, "IN$"))

or: mvfilter(match(License_feature_status, "OUT$")) MINUS mvfilter(match(License_feature_status, "IN$")).

Field subtraction wasn't working; it always returned 0 (zero). Any ideas? Thanks in advance, Kai
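mvfilter returns strings, so subtracting its results gives nothing useful; the subtraction has to happen on counts. A sketch (using the field names from the search above): count OUT and IN events separately with eval-conditioned counts inside timechart, then subtract the resulting columns:

```
index="application-license" sourcetype=application License_feature_status=*
| timechart count(eval(match(License_feature_status, "OUT$"))) as out count(eval(match(License_feature_status, "IN$"))) as in
| eval out_minus_in = out - in
```

Inside stats/timechart aggregations, count(eval(<boolean expression>)) counts only the events where the expression is true, so out and in become per-bucket numbers that can be subtracted with a plain eval.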
Hi Splunkers, Splunk suggests extracting fields at the forwarders for structured data. Why? And what if I do have field names in the log, or no field names in the log? I am confused about whether my license usage is affected by structured field extraction at index time / at the forwarders. I understand that the Splunk license counts against what you index, so if I do indexed field extractions, will those field/value pairs be added to _raw and increase license usage? Is that correct? And for unstructured data Splunk suggests doing extraction at search time? I am mostly clear on these points, but sometimes not. Any advice will be appreciated. Pramodh B
Hi All, I have encountered a mismatch between the license EPD of ES and a | tstats count over the same index. FYI, the EPD is based on the _internal metrics.log data. The number of events I can see with the tstats command is much lower than the number in _internal metrics.log. Can anyone please help me understand the reason for this? Thanks!
Hi, I have installed the Java agent on my Linux server, and I appended the javaagent arguments after my startup command. The whole command:

"$JAVA_HOME/bin/java -jar /tmp/my-api.jar -ms64m -mx256m
  -javaagent:/opt/appdyn/javaagent/4.4.3.22593/javaagent.jar
  -Dappdynamics.controller.ssl.enabled=true
  -Dappdynamics.force.default.ssl.certificate.validation=true
  -Dappdynamics.agent.accountName=customer1
  -Dappdynamics.agent.accountAccessKey=........
  -Dappdynamics.controller.hostName=........
  -Dappdynamics.controller.port=.....
  -Dappdynamics.agent.applicationName=........
  -Dappdynamics.agent.tierName=my-api
  -Dappdynamics.agent.nodeName=_my-api"

my-api is a Spring Boot web application. After running this command, it seems no agent log is generated.
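One thing worth checking (general JVM behavior, not specific to AppDynamics): everything after `-jar /tmp/my-api.jar` is passed to the application as program arguments, not to the JVM, so a -javaagent or -D flag placed there is never seen by the JVM. A sketch with the JVM options moved before -jar (the elided values stay as placeholders):

```shell
"$JAVA_HOME/bin/java" -Xms64m -Xmx256m \
  -javaagent:/opt/appdyn/javaagent/4.4.3.22593/javaagent.jar \
  -Dappdynamics.controller.ssl.enabled=true \
  # ... remaining -Dappdynamics.* properties here, unchanged ... \
  -jar /tmp/my-api.jar
```

With the flags in this position the agent should load at JVM startup and begin writing its logs.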
index=_internal
| eventstats count by sourcetype
| where count > 100
| timechart span=1m count by sourcetype

(note: earliest=-60m)

If a sourcetype's total is less than the threshold, I don't want to display that field. eventstats calculates over all events, which is inefficient. Is there a better way?
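One cheaper alternative (a sketch over the same search): run timechart first, then filter the already-aggregated rows instead of tagging every raw event with eventstats. untable flattens the chart, the threshold is applied per sourcetype, and xyseries rebuilds the chart:

```
index=_internal earliest=-60m
| timechart span=1m count by sourcetype
| untable _time sourcetype count
| eventstats sum(count) as total by sourcetype
| where total > 100
| xyseries _time sourcetype count
```

Here eventstats only operates on the small timechart result (one row per minute per sourcetype), not on the full event set, so memory use stays low.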
Hi, what is the best way to repopulate a CSV with data from a search using curl, but without using a username and password, since I want to run the search from cron? Thanks
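One option (a sketch; token authentication requires a recent Splunk version with tokens enabled, and the saved-search name here is a hypothetical placeholder) is to create an authentication token under Settings and pass it as a bearer token, so no username/password appears in the cron job:

```shell
curl -k -H "Authorization: Bearer $SPLUNK_TOKEN" \
  https://localhost:8089/services/search/jobs/export \
  -d search="| savedsearch my_csv_refresh" \
  -d output_mode=csv
```

Keeping the token in an environment variable or a root-readable file avoids embedding credentials in the crontab itself.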
Hi, I am very new to Splunk. I am looking for a way to get Windows logs into Splunk. I downloaded the Splunk forwarder, but the issue is that this gives me gibberish logs. Example:

"--splunk-cooked-mode-v3--\x00\x00\x00\x00\x00\x00\x00\x00\"

I understood this is because the traffic is TCP but is not being recognized as coming from a Splunk forwarder, and that the receiving side needs to be configured in Splunk itself as receiving from a Splunk forwarder? But is this not allowed with a free license? If anyone has a link explaining this, that would be a massive help; I would love to understand it better. I apologize up front if this is a really silly question and the answer is obvious.
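For reference (a sketch, assuming the conventional port 9997): "cooked" forwarder traffic must be received on a splunktcp input, not a raw [tcp://...] input, which is why the raw input shows the --splunk-cooked-mode-v3-- header as gibberish. On the receiving instance this can be enabled via Settings > Forwarding and receiving, or in inputs.conf:

```ini
# inputs.conf on the receiving Splunk instance
[splunktcp://9997]
disabled = 0
```

As far as I know, receiving from forwarders works on the free license too; the free license limits daily indexing volume and some features, not forwarder receiving.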
I have created few alerts which need to run only from Monday to Friday, but I have not been able to find a way to exclude Saturday and Sunday. Can anyone assist with this please?
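Scheduled alerts can use a cron schedule, and standard cron syntax handles this: the fifth field restricts the day of week, where 1-5 means Monday through Friday. A sketch (the times here are arbitrary examples):

```
0 * * * 1-5     # run at the top of every hour, Monday-Friday only
30 8 * * 1-5    # run at 08:30, Monday-Friday only
```

In the alert's schedule settings, choose the cron schedule option and enter an expression like one of the above; Saturday (6) and Sunday (0) are then excluded automatically.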
Microsoft is planning to block basic auth to POP and IMAP in Oct 2020. Are there plans to change the authentication method to support modern auth?
I have tried to log in using my standard Splunk login information, but I am unable to log in. I also tried logging in to splunk.com first and then going to the page via the link provided in the email, in 2 different browsers. Does anyone know how to log in to the Security Datasets Project?
I've gotten both the AWS add-on and the AWS app installed. They're both installed on the heavy forwarder and on the search head, but the add-on is not visible on the search head. I've verified that a bunch of data is making it into the index. The overview panels populate, but many of the more specific panels do not. I am particularly interested in the security VPC reports: while I see flow log data in the index, and can run the saved searches, which DO generate tables of data, the index they "collect" into is empty when I view it. I'm not sure how to track down what's going on from this point; I inherited this Splunk setup fairly recently.
Using the Java API and requesting a streaming export from Splunk, a search like this:

search index="client_ndx" sourcetype="client_source" (field1 = "*") | regex field1 != "val1|val2|val3" | fields field1, field2, field3, field4, _time | fields - _raw

(note: ending with "| fields - _raw") returns the labeled fields, but ending it without that exclusion fails with the following error:

java.lang.RuntimeException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[124683119,213]
Message: JAXP00010004: The accumulated size of entities is "50,000,001" that exceeded the "50,000,000" limit set by "FEATURE_SECURE_PROCESSING".
    at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:128)
    at com.splunk.ResultsReader.getNextElement(ResultsReader.java:87)
    at com.splunk.ResultsReader.getNextElement(ResultsReader.java:29)
    at com.splunk.StreamIterableBase.cacheNextElement(StreamIterableBase.java:87)
    at com.splunk.StreamIterableBase.access$000(StreamIterableBase.java:28)
    at com.splunk.StreamIterableBase$1.hasNext(StreamIterableBase.java:37)
    at com.insightrocket.summaryloaders.splunk.SplunkParser.run(SplunkParser.java:112)
    at java.lang.Thread.run(Thread.java:745)
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[124683119,213]
Message: JAXP00010004: The accumulated size of entities is "50,000,001" that exceeded the "50,000,000" limit set by "FEATURE_SECURE_PROCESSING".
    at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:596)
    at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(XMLEventReaderImpl.java:83)
    at com.splunk.ResultsReaderXml.readSubtree(ResultsReaderXml.java:423)
    at com.splunk.ResultsReaderXml.getResultKVPairs(ResultsReaderXml.java:325)
    at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:124)
    ... 7 more

I specifically used the export endpoint to get a stream and bypass the maximum record count, but a change in the system now requires the use of the _raw field.
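The JAXP00010004 error is a JDK XML-processing limit, not a Splunk limit, so one workaround is to raise or disable the accumulated-entity-size limit in the consuming JVM before the reader is created. A sketch (0 means "no limit"; the same effect can be had with the JVM flag -Djdk.xml.totalEntitySizeLimit=0):

```java
public class Main {
    public static void main(String[] args) {
        // Must be set before any com.splunk.ResultsReaderXml is constructed,
        // since the underlying StAX reader picks the limit up at creation time.
        System.setProperty("jdk.xml.totalEntitySizeLimit", "0");

        // ... create the Splunk Service and run the export as before ...
    }
}
```

Alternatively, requesting the export in JSON or CSV output mode and using the matching ResultsReader avoids the XML entity accounting entirely.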
We have the following health check for whether a server is up or down:

| tstats latest(_time) as latest where index=os_solaris source="Unix:Uptime" by host
| convert timeformat="%b %d, %Y %H:%M:%S" ctime(latest) AS Last_Log
| where latest < relative_time(now(), "-10m")
| table Last_Log, host

What would be a better approach, one that won't produce a false alert on 100 servers at once, as happened today?
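When 100 hosts trip the check simultaneously, the likely cause is a forwarding or indexing outage rather than 100 real host failures. One guard (a sketch; the threshold of 20 is an arbitrary assumption to tune) is to count how many hosts are silent and suppress the alert when the number is implausibly large:

```
| tstats latest(_time) as latest where index=os_solaris source="Unix:Uptime" by host
| where latest < relative_time(now(), "-10m")
| eventstats count as down_count
| where down_count < 20
| convert timeformat="%b %d, %Y %H:%M:%S" ctime(latest) AS Last_Log
| table Last_Log, host
```

A separate alert on down_count >= 20 can then page for the "pipeline is broken" case instead of flooding the server-down alert.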
I'm working with dashboards and the goal is to show a bar-graph panel that displays the counts for two different fields separately (2 bars per timespan), if possible. The data is from the same index: the action field (action=blocked) and the category field (category=221). I can build a visual for each individual field but am having trouble combining the two.

index=url_filter action=blocked login_id="$user$" | stats count by _time | bucket _time span=1h

I have another field, not exclusive to action=blocked, that I'd like to display as well. Any tips appreciated.
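One way (a sketch, assuming both fields live on events in index=url_filter) is to drop the action=blocked filter from the base search and count each condition as its own series with eval expressions inside timechart:

```
index=url_filter login_id="$user$"
| timechart span=1h count(eval(action="blocked")) as blocked count(eval(category="221")) as category_221
```

Each named series becomes a separate bar per hour when the panel is rendered as a column chart, giving the two-bars-per-timespan layout.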