All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


The details tab on our add-on page (Administrator tools > Manage App > Details) says it supports Markdown. We have added images to the content, but they do not render. The syntax is in line with https://daringfireball.net/projects/markdown/syntax#img. Is there anything else that I am missing?
My search produced the following CSV:

Date    Page_1  Page_2  Page_3  Page_4  Page_5  Page_6 ...
1-Jan   1       2       3       4       5       6
2-Jan   10      20      3       4       5       6
..      ..      ..
22-Apr  100     200     3000    7654    86895   76476

How can I calculate the difference between the first and the last row for every page? Please help.
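For clarity, the arithmetic being asked for (last row minus first row, per Page_* column) can be sketched in Python. The column names and values are taken from the CSV excerpt above, truncated to three pages:

```python
# Difference between the first and the last row for every Page_* column.
# Values mirror the sample CSV above (only three pages shown).
rows = [
    {"Date": "1-Jan",  "Page_1": 1,   "Page_2": 2,   "Page_3": 3},
    {"Date": "2-Jan",  "Page_1": 10,  "Page_2": 20,  "Page_3": 3},
    {"Date": "22-Apr", "Page_1": 100, "Page_2": 200, "Page_3": 3000},
]

first, last = rows[0], rows[-1]
diffs = {col: last[col] - first[col] for col in first if col.startswith("Page_")}
print(diffs)  # {'Page_1': 99, 'Page_2': 198, 'Page_3': 2997}
```

In SPL the same idea is usually expressed with first()/last() per column, along the lines of `| stats first(Page_1) as f1, last(Page_1) as l1 | eval diff_1 = f1 - l1` (field names here are illustrative, and which of first/last is the newest row depends on sort order).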
I want to place an image in one panel. I am trying to use a thumbs-up image; for testing purposes please use this image: https://raw.githubusercontent.com/rajapatra/Test/master/bg.jpg Thanks in advance.
I am trying to fetch the latest 50 events by _time. This is the search I used, putting the required columns in a table:

index=intlims country!=US appSection!=DSIM* AND appSection!=Features AND appSection!=Price* services{}.name!="EvaluateIMSRule" AND services{}.name!="printShelfLabel"
| rename services{}.successful as servicesSuccessful, services{}.name as servicesName
| table country, site, servicesName, servicesSuccessful, _time

There are only a few items in the search results, but there are at least a million events.

country site  servicesName        servicesSuccessful _time
MX      02731 GetProductLocations true               2020-04-22 16:20:53
CA      03739 GetItemInfoForBin   true               2020-04-22 16:20:53
MX      03866 GetProductLocations true               2020-04-22 16:20:53
UK      04844 GetProductLocations true               2020-04-22 16:20:53
NI      04396 GetProductLocations true               2020-04-22 16:20:53

Expected results are shown below, fetching the latest 50 events by time per servicesName, site and country:

country site  servicesName        servicesSuccessful _time
MX      02731 GetProductLocations true               2020-04-22 16:20:53
MX      02731 GetProductLocations true               2020-04-22 16:18:53
MX      02731 GetProductLocations false              2020-04-22 16:16:53
MX      02731 GetProductLocations true               2020-04-22 16:15:53
MX      02731 GetProductLocations true               2020-04-22 16:00:53
CA      03739 GetItemInfo         true               2020-04-22 16:20:53
CA      03739 GetItemInfo         true               2020-04-22 16:19:53
CA      03739 GetItemInfo         false              2020-04-22 16:18:53
CA      03739 GetItemInfo         true               2020-04-22 15:20:53
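The "latest N per group" logic the question describes can be sketched in Python. The event tuples below are illustrative, and N is 2 rather than 50 to keep the sample small:

```python
from collections import defaultdict

# Keep the latest N events per (country, site, servicesName) group.
events = [
    ("MX", "02731", "GetProductLocations", "2020-04-22 16:20:53"),
    ("MX", "02731", "GetProductLocations", "2020-04-22 16:18:53"),
    ("MX", "02731", "GetProductLocations", "2020-04-22 16:16:53"),
    ("CA", "03739", "GetItemInfo",         "2020-04-22 16:20:53"),
]

def latest_per_group(events, n):
    groups = defaultdict(list)
    # ISO-style timestamps sort correctly as strings; newest first.
    for country, site, service, ts in sorted(events, key=lambda e: e[3], reverse=True):
        key = (country, site, service)
        if len(groups[key]) < n:
            groups[key].append(ts)
    return groups

g = latest_per_group(events, 2)
```

In SPL this is commonly done by sorting on _time and ranking with streamstats, e.g. `| sort 0 - _time | streamstats count as rank by country, site, servicesName | where rank <= 50` (the field name rank is an illustration).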
Hello, I've installed the Linux Auditd app https://splunkbase.splunk.com/app/2642/ and I'm pulling auditd logs into Splunk. Once a user switches to root ("su root"), the events are logged by the app but I can't group the events together with the corresponding user. Other questions about the app suggest this should work out of the box, so I guess there is something I need to change. I am interested in results similar to running "ausearch -ui", where the elevated "su -" events are grouped together with sudo and regular events by a/uid. Any advice or links to knowledge are appreciated! Thank you, -Sean

I have run through the configure option within the app and everything seems to be populating correctly.

--------------- sample log -----------------
2020-04-20T14:13:28.243-0700 type=PROCTITLE msg=audit(1587417208.243:33451): proctitle=636861747472002B61002F726F6F742F2E626173685F686973746F72792E616C696D6D2D61
2020-04-20T14:13:28.243-0700 type=PATH msg=audit(1587417208.243:33451): item=1 name="/lib/ld64.so.1" inode=155 dev=fe:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2020-04-20T14:13:28.243-0700 type=PATH msg=audit(1587417208.243:33451): item=0 name="/usr/bin/chattr" inode=405135 dev=fe:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2020-04-20T14:13:28.243-0700 type=CWD msg=audit(1587417208.243:33451): cwd="/root"
2020-04-20T14:13:28.243-0700 type=EXECVE msg=audit(1587417208.243:33451): argc=3 a0="chattr" a1="+a" a2="/root/.bash_history.ali12"
2020-04-20T14:13:28.243-0700 type=SYSCALL msg=audit(1587417208.243:33451): arch=80000016 syscall=11 success=yes exit=0 a0=1ca3b8d0 a1=1ca4dd20 a2=1ca4a0b0 a3=3ffa0d79710 items=2 ppid=55675 pid=55721 auid=1503413 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=602 comm="chattr" exe="/usr/bin/chattr" key=(null)
2020-04-20T14:13:28.243-0700 type=PROCTITLE msg=audit(1587417208.243:33450): proctitle="logname"
2020-04-20T14:13:28.243-0700 type=PATH msg=audit(1587417208.243:33450): item=1 name="/lib/ld64.so.1" inode=155 dev=fe:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2020-04-20T14:13:28.243-0700 type=PATH msg=audit(1587417208.243:33450): item=0 name="/usr/bin/logname" inode=405524 dev=fe:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2020-04-20T14:13:28.243-0700 type=CWD msg=audit(1587417208.243:33450): cwd="/root"
2020-04-20T14:13:28.243-0700 type=EXECVE msg=audit(1587417208.243:33450): argc=1 a0="logname"
2020-04-20T14:13:28.243-0700 type=SYSCALL msg=audit(1587417208.243:33450): arch=80000016 syscall=11 success=yes exit=0 a0=1ca38c60 a1=1c912b90 a2=1ca4a0b0 a3=3ffa0d79710 items=2 ppid=55719 pid=55720 auid=1503413 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=602 comm="logname" exe="/usr/bin/logname" key=(null)
2020-04-20T14:13:39.293-0700 type=PROCTITLE msg=audit(1587417219.293:33452): proctitle=617564697463746C002D6C
2020-04-20T14:13:39.293-0700 type=PATH msg=audit(1587417219.293:33452): item=1 name="/lib/ld64.so.1" inode=155 dev=fe:00 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2020-04-20T14:13:39.293-0700 type=PATH msg=audit(1587417219.293:33452): item=0 name="/sbin/auditctl" inode=420444 dev=fe:00 mode=0100750 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2020-04-20T14:13:39.293-0700 type=CWD msg=audit(1587417219.293:33452): cwd="/root"
2020-04-20T14:13:39.293-0700 type=EXECVE msg=audit(1587417219.293:33452): argc=2 a0="auditctl" a1="-l"
2020-04-20T14:13:39.293-0700 type=SYSCALL msg=audit(1587417219.293:33452): arch=80000016 syscall=11 success=yes exit=0 a0=1ca5f0b0 a1=1ca5e8f0 a2=1ca778a0 a3=3ffa0d79710 items=2 ppid=55675 pid=55727 auid=1503413 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=602 comm="auditctl" exe="/usr/sbin/auditctl" key=(null)
I need to write a custom alert that generates an email notification when a file is not present. I currently have a daily alert (9 pm) set up to check whether a .error file is present. If the file is present, an email including the file is generated. Now I want to set up a custom email message for when the file is not present, for example: "No file was generated so there is no file attached".
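The branching the alert needs is simple to state; here is a minimal Python sketch of it (the path is hypothetical, and how the check is wired into the alert action is left open):

```python
import os

# One message when the .error file exists, a different one when it does not.
def alert_message(path):
    if os.path.exists(path):
        return f"Error file found: {path} (attached)."
    return "No file was generated so there is no file attached."

msg = alert_message("/nonexistent/path/example.error")
print(msg)
```

In Splunk terms the usual approach is two alert conditions: one that fires when the search returns results (file present) and one that fires when it returns zero results (file absent), each with its own email message.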
Here is the raw event log:

Apr 22 08:04:46 10.14.10.66 1 2020-04-22T08:04:47-07:00 connect.abcd.com PulseSecure: - - - 2020-04-22 08:04:47 - ive - [10.12.6.240] sramachandran(VPNUsers)[] - Login failed. Reason: No Roles
Apr 22 08:04:46 10.14.10.66 1 2020-04-22T08:04:47-07:00 connect.abcd.com PulseSecure: - - - 2020-04-22 08:04:47 - ive - [10.12.6.240] sramachandran(VPNUsers)[] - Login failed from 10.12.6.240 for sramachandran/VPNUsers. All roles restricted.

I would like to extract only the username (e.g. sramachandran in this case) into a field called "UserName". Can you please help me achieve this? Thanks in advance.
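The extraction can be prototyped with a regex in Python first; the pattern below assumes the "user(realm)[]" shape seen in both sample lines:

```python
import re

# Pull the username that precedes "(VPNUsers)" out of a sample PulseSecure line:
# the first "]" is the close of the source-IP bracket, then whitespace, then the user.
line = ("Apr 22 08:04:46 10.14.10.66 1 2020-04-22T08:04:47-07:00 connect.abcd.com "
        "PulseSecure: - - - 2020-04-22 08:04:47 - ive - [10.12.6.240] "
        "sramachandran(VPNUsers)[] - Login failed. Reason: No Roles")

m = re.search(r"\]\s+(?P<UserName>\w+)\(", line)
user = m.group("UserName") if m else None
print(user)  # sramachandran
```

If the pattern holds for all events, the same regex should translate to an SPL extraction along the lines of `| rex "\]\s+(?<UserName>\w+)\("`.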
Is there a way to show the source for an event in the results of a search? I want to see the complete source for every row of the results. Put simply: the information you would see for a selected row when you manually click into an event via "Event actions --> Show Source", which opens a fresh page showing the source info. I'm looking to see that info for every row on the screen. I understand how to pick one row and use that dropdown to see its source. Please note that the answer below does not address this question; it only explains how to manually view the source for one event at a time (the one you click into): https://answers.splunk.com/answers/289234/how-to-display-the-entire-source-under-each-event.html Thanks! Will-
I have set Splunk to ingest the /var/log directory. On this particular host, when I filter by "source", I only see 2 sources:

/var/log/messages
/var/log/maillog

Why is it not seeing other files and folders? For example, there is /var/log/audit/audit.log.
Hi, I am looking to get data in a particular format from the Tomcat access logs for a particular time span.
We are using a Horizon View 7 connection server to manage desktop virtual machines in multiple domains. We are using a single-instance Splunk Enterprise server, with Splunk Universal Forwarders sending the data. All Horizon desktop pools use the Instant Clones method and are all based on snapshots of a single "Gold Image". The desktop pools have different naming conventions, but the VMs are named with consistent prefixes within their own pools. For example:

devel-01, devel-02, etc. for the "devel" pool in the "devel" domain
prod-01, prod-02, etc. for the "prod" pool in the "prod" domain

For all other applications this has been a great way to reduce administrative overhead, but the Splunk Universal Forwarder is giving me fits. I need both the hostname AND the index to be dynamic. I have tried configuring the Gold Image with the "splunk clone-prep-clear-config" command, but that only affects the hostname. I want these desktops to send data to indexes specifically for their domains. Basically, I want to end up with an $SPLUNK_HOME/etc/system/local/inputs.conf that dynamically assigns the host and index values based on which desktop it is running on. Something that looks like:

[default]
host = domain-number
index = domain

Can the Splunk Universal Forwarder be incorporated into a clone image in this manner?
I have a Splunk cluster with 1 search head, 1 index master, and 2 clustered indexers. I need to move all 4 to new hardware. What's the best way to do this?
Hi all, I am facing a strange issue while working on a custom Splunk app together with a couple of fellow developers. We are using Git as our app/code repository, and in the world of Splunk the majority of the content there is *.conf files, e.g. savedsearches.conf and macros.conf. Initially we were doing just fine developing in parallel, but recently we found that when making changes through the Splunk GUI, e.g. changing the SPL of a saved search, the stanza for that saved search ends up being moved to the end of the savedsearches.conf file. This happens every time something is changed, and it causes a lot of complex merge conflicts in our repo. Is there a way to tell Splunk NOT to move the latest updates to the end of the *.conf file, OR is there a solution within Git to handle these merge conflicts better? Thank you in advance!
I have a multiselect option in my dashboard that defines regex number ranges. I want to group the "selected" number range elements and compare them against the ones that weren't selected. My number range elements look like this (note, these are regexes):

(852[01])
(4[0-4][0-9]{2}|45[0-7][0-9]|458[01])
(850[0-9]|851[0-3])
(831[4-9]|83[2-9][0-9]|84[0-5][0-9]|846[0-7])

My search contains the field value to compare the regexes against: Message{}.itemnumber, renamed to itemNumber, is the field to check against the regex number ranges. I tried:

| eval SELECTED=case(itemNumber=$ITEM_NUMBER$, Selected, 1=1, Unselected)

Any suggestions on how to accomplish this?
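The classification being asked for (item number matches one of the selected regex ranges, or not) can be sketched in Python; the patterns are two of the ranges quoted above:

```python
import re

# Classify an item number as Selected/Unselected against the multiselect's
# regex ranges. fullmatch requires the whole number to match one alternative.
selected_ranges = [r"852[01]", r"4[0-4][0-9]{2}|45[0-7][0-9]|458[01]"]

def classify(item_number, patterns):
    for p in patterns:
        if re.fullmatch(p, item_number):
            return "Selected"
    return "Unselected"
```

In SPL, a plain `=` comparison does not do regex matching; the case() would need match() instead, something like `eval SELECTED=case(match(itemNumber, "^($ITEM_NUMBER$)$"), "Selected", 1=1, "Unselected")` (how the multiselect token expands into that pattern is an assumption to verify).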
My events are JSON based and look like this one:

{
  "severity": "DEBUG",
  "message": {
    "list": [
      ["email1@some.domain", "2020-04-21 14:27:19"],
      ["email2@some.domain", "2020-04-21 16:48:43"]
    ]
  },
  "Type": "someType"
}

The structure is not very good for performing operations on the data at message.list, but I have to work with it. The emails in the message.list field are unique within each event. My task is to calculate the number of unique email addresses for each type (the message.Type field) across all events returned by my search. I was able to calculate the number of emails for each type, but not the number of unique email addresses. This is my search:

someMySearchConditions
| spath
| rename "message.list{}{}" as rows
| rex field=rows max_match=0 "(?<email>[\w\d\.\-]+\@[\w\d\.]+)"
| eval numberOfEmails = mvcount(email)
| search numberOfEmails > 0
| stats sum(numberOfEmails) by message.Type

I tried to convert the multivalue field email with the split function into multiple single-value events, but without any success (the cleaned email addresses are separated by a carriage return or something like it). How can I "join" all emails (or maybe the value of the message.list field) to get data in this format, where n is the number of unique email addresses for each type?

count | Type
n     | someType1

Thank you.
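The set-based counting the question is after can be sketched in Python; the events below are made up to mirror the sample JSON structure, with one address repeated across events:

```python
# Count distinct email addresses per Type across events shaped like the sample.
events = [
    {"Type": "someType1", "message": {"list": [["email1@some.domain", "t1"],
                                               ["email2@some.domain", "t2"]]}},
    {"Type": "someType1", "message": {"list": [["email1@some.domain", "t3"]]}},
]

unique_by_type = {}
for ev in events:
    emails = unique_by_type.setdefault(ev["Type"], set())
    for email, _ts in ev["message"]["list"]:
        emails.add(email)

counts = {t: len(s) for t, s in unique_by_type.items()}
print(counts)  # {'someType1': 2}
```

The SPL analogue of the set is distinct count: replacing the sum with `| stats dc(email) by message.Type` after the rex extraction should give one row per Type with the number of unique addresses.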
Hi, I am looking for a conditional eval for my results; could you please help me with the correct query?

index=myindex source="access" api="sendverification?email=*&op=passwordrecovery" OR api="*reset-password.html" OR api="*updatepassword" response_code=200 OR response_code=400
| eval API=case(api LIKE "%op=passwordrecovery", "Request Password Reset", api LIKE "%reset-password.html", "Email Link", api LIKE "%updatepassword", "updatepassword")
| chart count by API

However, api=updatepassword has response_code 200 (success) and 400 (failure). I need to display api=updatepassword with response_code 200 as Success, and with response_code 400 as Failure. As a result I need to display all four API counts, like:

Date, API, Count
2020-04-22, Request Password Reset, 10
2020-04-22, Email Link, 5
2020-04-22, Success, 9
2020-04-22, Failure, 1

Thanks! Pavan
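The labelling logic the question describes, with updatepassword split by response code, can be sketched in Python (the suffix checks stand in for the SPL LIKE wildcards):

```python
# Map (api, response_code) to the four display labels the question asks for:
# updatepassword splits into Success/Failure, the other two keep fixed labels.
def label(api, response_code):
    if api.endswith("op=passwordrecovery"):
        return "Request Password Reset"
    if api.endswith("reset-password.html"):
        return "Email Link"
    if api.endswith("updatepassword"):
        return "Success" if response_code == 200 else "Failure"
    return None
```

In SPL this maps to adding the response code to the case() branches, e.g. `api LIKE "%updatepassword" AND response_code=200, "Success", api LIKE "%updatepassword" AND response_code=400, "Failure"`.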
I have an issue where a field in the database I connect to contains quotation characters. Previously I have used the Replace function in the SQL editor:

Replace ("Closure Text", '"', '%') AS "Closure Text"

This works for an Oracle SQL database. When I use the same command against an MS SQL database I get an error:

"java.lang.IllegalStateException: Column name conflicted, please set shortnames option to false and retry"

The query below works if I give new field names, but I really want to use the same field names.

SELECT *,
  isnull(Workgroup, 'Blank') as Workgroup2,
  Replace (last_action, '"', '%') AS last_action,
  Replace (Problem, '"', '%') AS Problem
FROM "OSMQ"."dbo"."OSMQ_TicketData"
WHERE (Ticket_Status='Cancelled' or Ticket_Status='Closed' or Ticket_Status='Solved/Not Closed')
  and (Date_Closed >= CONVERT(DATETIME, CONVERT(DATE, getdate()))-1
  and Date_Closed < CONVERT(DATETIME, CONVERT(DATE, getdate())));

Any advice would be greatly appreciated. I am using Splunk Enterprise 8.0.3 and DB Connect 3.2.0.
Hi, I am looking to merge 2 values of a multivalued field and put the result in a table. For example, my current query is extracting data like this:

Business Exception while rescheduling order    1103
ICOMS Business Exception.                      1103
Business Exception                             665

The 1st and 2nd rows are actually the same event, because the description tag I am extracting sometimes appears twice. I have tried combining them using mvcombine and nomv, but then how would I extract the _time field as well? Query:

index=my_prod source="/var/log/test.txt" mywebservice (error OR exception OR critical OR failure)
| rex field=_raw "description>(?<Exception>[^<]+)" max_match=2
| stats count by Exception
| sort - count
| mvcombine delim=" | " Exception
| nomv Exception

What I want to do is also add the _time field to the table, and that event should appear as a single row with a count of 1. Let me know if someone can help.
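The merge-per-event behaviour being asked for can be sketched in Python: join the (up to two) description values extracted from one event into a single key before counting, so the pair from the first event counts as one row. The sample data mirrors the rows quoted above:

```python
from collections import Counter

# Join the description values extracted from each event, then count per
# joined string, so a two-description event contributes a single row.
events = [
    {"time": "t1", "descriptions": ["Business Exception while rescheduling order",
                                    "ICOMS Business Exception."]},
    {"time": "t2", "descriptions": ["Business Exception"]},
]

counts = Counter(" | ".join(ev["descriptions"]) for ev in events)
```

The key point is to combine the multivalue field per event (before any stats), e.g. with an eval using mvjoin, rather than combining rows after `stats count by Exception` has already split them.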
I am running Splunk Enterprise 8.0.2 on my Windows 10 PC (at home, obviously [COVID-19 etc.]) and using Chrome 81. I am UK based and everything is configured for the UK. I am pulling in log files in which each line starts with the date and time, e.g.

2020-03-27 00:04:18 Syncapp 01234 info [varying rubbish] : Started sync 'SyncGroup01::SyncJob11'

Whenever I search for these events, the 'Time' field always gives the date in US style. Changing the "Preferences -> Timezone" setting doesn't seem to have any impact. Is there a way to make Splunk understand that I am in the UK and display UK-style dates by default?
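It may help to separate the two concerns: the log timestamps themselves are in ISO order (year-month-day), so parsing is unambiguous; rendering them day-first is purely a display-format choice, which this Python sketch illustrates:

```python
from datetime import datetime

# Parse the ISO-style timestamp from the sample log line, then render it
# day-first (UK style). Only the output format changes, not the instant.
ts = datetime.strptime("2020-03-27 00:04:18", "%Y-%m-%d %H:%M:%S")
uk_style = ts.strftime("%d/%m/%Y %H:%M:%S")
print(uk_style)  # 27/03/2020 00:04:18
```

On the Splunk side, the Time column rendering is driven by the UI locale rather than the timezone preference; if I recall correctly, browsing Splunk Web under a locale-prefixed URL (e.g. /en-GB/ instead of /en-US/) changes the date rendering, but that is worth verifying against the docs for your version.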
I don't know if a Containers data model is on Splunk's roadmap. Is there an official data model that supports container logs (Kubernetes, Docker)?