All Topics

(This is a continuation of https://community.splunk.com/t5/Splunk-Search/Creating-a-search-that-looks-up-values-from-one-logfile-and/td-p/565097) What I'm trying to do is compare values across two different logs that share a unique ID (see the link above for more info). The problem I have is that one log is written by a cron job every 5 minutes, while the other log is only written when required. I'm using the code below:

index=foo (sourcetype=test1 OR sourcetype=test2) host=A* (source="/data/stuff/logfile1.log" OR source="/data/stuff/logfile2.log")
| eval ID=coalesce(lastupdate_direc, file1ID)
| stats values(lastupdate_time) as lastupdate_time, values(file1ID) as file1ID by host, ID
| eval int_time=strptime(lastupdate_time, "%F %H:%M")
| eval timenow=now()
| eval diff_new=timenow-int_time
| eval days_since=((diff_new-14400)/60/60/24)
| table lastupdate_time host name ID days_since

(Note: the original eval was missing an assignment; the stats groups by ID, so the coalesce result is assigned to ID here.)

As I'm trying to be nice to my indexer, I'm only looking at the past 15 minutes (because of time drift between the servers), so I get multiple lastupdate_time entries in the table, and because of that the days_since field is left blank. I've tried charting the latest of the values, but I get no results. Am I just not outputting to the correct visualization function, or would something other than table be better? Thanks!
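One direction for the multiple-values problem (only a sketch built from the field names in the question, untested): let stats keep just the latest timestamp per host/ID pair, so days_since is computed from a single value instead of a multivalue field:

```
index=foo (sourcetype=test1 OR sourcetype=test2) host=A* (source="/data/stuff/logfile1.log" OR source="/data/stuff/logfile2.log")
| eval ID=coalesce(lastupdate_direc, file1ID)
| stats latest(lastupdate_time) as lastupdate_time by host, ID
| eval days_since=(now() - strptime(lastupdate_time, "%F %H:%M") - 14400) / 86400
| table lastupdate_time host ID days_since
```

With a single value per group, strptime returns a number rather than null, so days_since should populate.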
Dear Splunk Community, I need some help fetching data from a source, then using the results as a search parameter for several other searches, and putting all of the results into one statistics table.

I have (names have been changed):
One index: [myIndex]
One host: [myHost]
Source one: [/xxx/xxx/xxxxx/xxxx/log/xxxxxxx/*/*.log]
Source two: [/yyy/yyy/yyyyy/yyyy/log/yyyyyyyyy/firstlogfile.log]
Source three: [/zzz/zzz/zzzzz/zzzz/log/zzzzzzzz/seconflogfile.log]

I have the following dashboard: in the upper left you see a statistics table. A RUNID is basically a timestamp. I have a lot of different files that contain the RUNID. I collect all the files and then display each RUNID only once (so duplicates are not shown) using the following query:

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/*/*.log"
| eval source=replace(source,"^/xxx/xxx/xxxxx/xxxx/log/xxxxxx/","")
| eval source=replace(source,"/.*","")
| stats values(source) by source
| fields source
| rename source AS "RUNID"
| sort RUNID desc

When I click any RUNID, another search is performed in the upper-right (DATA) statistics table. It has the following query:

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/$tokenRUNID$/*.log" CTJT* $meldingen$
| fields _time, _raw
| rename _time AS "Datum"
| convert timeformat="%m-%d-%Y %H:%M:%S" ctime(Datum)
| eval _raw = replace(_raw,"^[^#]*#", "")
| rename _raw AS "Sensor Activiteit"
| sort Datum desc

In the bottom (center) I have 3 single-value fields that show ERRORS, WARNINGS and INFO. For each I have the following code (with the exception of the ERROR/WARN/INFO term):

index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/$tokenRUNID$/*.log" CTJT* AND CASE("ERROR")
| stats count

And lastly I have a single-value field showing the profile:

index="myIndex" host="myHost" source="/yyy/yyy/yyyyy/yyyy/log/yyyyyyyyy/firstlogfile.log" OR source="/zzz/zzz/zzzzz/zzzz/log/zzzzzzzz/seconflogfile.log" $tokenRUNID$ "started with profile"
| rex field=_raw "profile\s(?<whatever>[^\s\r]+)"
| stats count by whatever
| fields - count

So right now I have multiple data searches in different representations (single-value fields, data tables, etc.). I would like to create a table of RUNIDs where the table also shows the PROFILE, DATE and the WARNINGS, ERRORS and INFO counts right next to each one. It should look like this, but populated: So I basically want to place multiple searches and their results into one statistics table. I have tried playing with appendcols as described in this topic: https://community.splunk.com/t5/Splunk-Search/multiple-search-output-in-a-single-table-list-something/m-p/39644#M9063 But using index there gives me the error "Unknown search command 'index'". How can I manage to get the above? Thanks in advance.
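One direction that is sometimes used for this (only a sketch; it assumes the RUNID can be recovered from the source path with a rex, that ERROR/WARN/INFO appear literally in the events, and it is untested): compute all three counts per RUNID in a single stats instead of three separate searches:

```
index="myIndex" host="myHost" source="/xxx/xxx/xxxxx/xxxx/log/xxxxxx/*/*.log" CTJT*
| rex field=source "log/xxxxxx/(?<RUNID>[^/]+)/"
| stats earliest(_time) as DATE
        sum(eval(if(searchmatch("ERROR"),1,0))) as ERRORS
        sum(eval(if(searchmatch("WARN"),1,0))) as WARNINGS
        sum(eval(if(searchmatch("INFO"),1,0))) as INFO
        by RUNID
| convert timeformat="%m-%d-%Y %H:%M:%S" ctime(DATE)
| sort - RUNID
```

The PROFILE column would still have to come from the /yyy and /zzz sources, for example via appendcols whose subsearch begins with the search command, e.g. appendcols [search index="myIndex" ...]; omitting the leading search keyword is typically what produces the "Unknown search command 'index'" error.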
Hi, Is trade-in value with Splunk Enterprise available when switching from a term perpetual license? Best regards, Pieter
I have a Splunk forwarder set up on my server to forward all my logs to Splunk. I could not find a default message field in Splunk search. I would like the log string to be part of a specific field, 'message'. So I decided to configure it in the Splunk forwarder settings as follows:

# /opt/splunkforwarder/etc/system/local/transforms.conf
[message-transform]
REGEX = *
FORMAT = message::"$1"
WRITE_META = true

# /opt/splunkforwarder/etc/system/local/props.conf
[host::*]
TRANSFORMS-message = message-transform

# /opt/splunkforwarder/etc/system/local/fields.conf
[message]
INDEXED=true

But even with this configuration, I am not seeing the message field in my Splunk search. Am I missing anything?
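One likely culprit here is that REGEX = * is not a valid regular expression and contains no capturing group, so $1 never gets a value. A sketch of what the transform might look like instead (untested; also note that FORMAT normally takes the bare $1 without quotes, and indexing the entire raw event as a field is generally discouraged in favor of search-time extraction):

```
# transforms.conf (sketch, assumes the whole event should become "message")
[message-transform]
REGEX = (.+)
FORMAT = message::$1
WRITE_META = true
```

The forwarder would also need a restart after the change for index-time settings to take effect.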
Hello all, we added a webhook as one of the actions for a Splunk report. But whenever the search criteria are met, we do not get any hit on the webhook. So we tried adding one more simple action, a mail alert. Surprisingly, I receive a mail once the search criteria are met, but still no hit on the webhook. The webhook URL is deployed on our internal network, so are we supposed to open the firewall for the Splunk Cloud instance URL? And does the Splunk team need to add our internal IP anywhere? Please provide your valuable suggestions to get this working. Please note that we are using a Splunk Cloud instance. Thanks in advance.
I am trying the "Prometheus Monitoring Extension" and facing a FetchError:

FetchError: request to https://fra-ana-api.saas.appdynamics.com/events/schema/prometheus_events failed, reason: connect ETIMEDOUT 18.159.38.70:443

Prometheus Monitoring Extension link: https://www.appdynamics.com/community/exchange/prometheus-monitoring-extension/
Identity: 314 assets are currently exceeding the field limits set in the Asset and Identity Management page. Data truncation will occur unless the field limits are increased. Sources: [merge].
Hi guys, probably a very simple question, but I've tangled myself up in the logic. I want to create 2 fields. The first holds today's date, which I have:

| eval today_date=strftime(now(),"%B %d, %Y")

The second should be that date minus 30 days. How do I go about it?
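A sketch of one way to get the second field (untested): use relative_time() to step back 30 days from now() before formatting, mirroring the first eval:

```
| eval today_date=strftime(now(), "%B %d, %Y")
| eval date_30_days_ago=strftime(relative_time(now(), "-30d"), "%B %d, %Y")
```

relative_time() returns an epoch time shifted by the given modifier, so the same strftime format string can be applied to both fields.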
Hello there, in Splunk Cloud, is there a way an alert could be created for the following example: an attacker logs in from London and the legitimate user is also based in London, so how do we identify them? As far as I know we can do this at country level based on geo-tagging of IPs, but can we configure this to drill down to city/location level? If so, how? Many thanks, Mozza
I can see that the Splunk App for CEF has been announced as EOL. Is a new app coming, or has it been renamed or rebranded? Does anyone know of any other app that does the same job?

https://docs.splunk.com/Documentation/CEFapp/2.3.0/ReleaseNotes/Newfeatures
Hi, I'm trying out the new Splunk dashboard, and the goal is to plot users' database document counts over time. The log contains a JSON map with the top 100 users with the most documents. Since user doc-counts differ over time, the keys will also differ:

"userDocuments": {
   "userA": 1836,
   "userD": 1197,
   "userB": 606,
   "userZ": 108062,
   "userE": 972,
   "userC": 931
}

I'm having a hard time creating a simple table like this:

User     Count
userA    1836
userD    1197
userB    606
userZ    108062
userE    972
userC    931

Any input on a query, or on changing the data structure?
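One possible sketch (untested, and it assumes the event body is plain JSON so spath can auto-extract the userDocuments.* fields; index and sourcetype names here are placeholders): flatten the map without naming each key, then transpose the columns into rows:

```
index=myindex sourcetype=mysourcetype
| head 1
| spath
| fields userDocuments.*
| rename userDocuments.* as *
| transpose column_name=User
| rename "row 1" as Count
```

transpose turns each extracted user field into its own row, which sidesteps the problem of the key set changing between events.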
I'm working on calculating the storage space taken up by a specific user. I would like to calculate the total size of their search artifacts at any given time; we would like to see whether they are hitting their storage limits regularly, which would indicate we need to increase the storage limit. I'm running this search to get the jobs for a specific user:

| rest /services/search/jobs
| search author=<user>

The next step would be to calculate the size of all jobs that have not expired yet. I can get the life of the search (ttl field) and when the search started (published field), and the sum of those should give the time the search expires. However, if I check the "Jobs" report, the "Expires" field does not correspond to what I calculate. There must be some additional factor involved in the calculation of the "Expires" field..?
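For the size part of the question, the jobs endpoint also returns a diskUsage field (bytes per search artifact), so a per-user total can be sketched like this (untested):

```
| rest /services/search/jobs
| search author=<user>
| stats sum(diskUsage) as total_bytes count as jobs
| eval total_MB=round(total_bytes/1024/1024, 2)
```

This sums only jobs the REST endpoint still reports, i.e. artifacts that have not yet expired and been reaped.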
Hi, I have a log like this:

2021-09-01T07:25:12.314Z id-xxx-xxx-xxx STATE {"Id":"id-xxx-xxx-xxx","timestamp":"2021-09-01T07:25:12.145Z","sourceType":"my_sourcetype","source":"source_name","Type":"my_type","event":{"field":"my_field"},"time":169,"category":"XXX"}

My props.conf looks like this:

[extract_json]
TRUNCATE = 999999
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_PREFIX = timestamp:
MAX_TIMESTAMP_LOOKAHEAD = 10000
BREAK_ONLY_BEFORE = {$
MUST_BREAK_AFTER = }$
SEDCMD-remove-header = s/^[0-9T\:Z]*.*\s*{/{/g

My issue is that I need to extract only the JSON element from my logs, but with these parameters I get a bad extraction: the end of my JSON ( {"field":"my_field"},"time":169,"category":"XXX"} ) goes to another event line and is not valid JSON. I have child brackets inside the parent bracket, and I think my SEDCMD is not correct. I would like the entire JSON element in one event. Can you help me please? Thank you!
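One commonly suggested direction (a sketch only, untested; the stanza name mirrors the question): strip everything up to the first opening brace instead of pattern-matching braces, and break on newlines rather than line-merging, so nested objects never split the event. The TIME_PREFIX is also adjusted to match the quoted JSON key as it actually appears in the raw event:

```
# props.conf (sketch)
[extract_json]
TRUNCATE = 999999
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "timestamp":"
MAX_TIMESTAMP_LOOKAHEAD = 30
SEDCMD-remove-header = s/^[^{]+//
```

The key change is the SEDCMD: ^[^{]+ consumes the timestamp/ID/STATE header up to the first { without touching any braces inside the JSON itself.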
Hello, I have a table in my dashboard:

ID   Jan_Target   Jan_Actual
1    50           60
2    0            N/A

and similarly for all months. Now I need a filter: if I select January, it should show both Jan_Target and Jan_Actual.

My query:

| inputlookup ABC
| table Id "Jan - Target" "Jan - Actual" "Feb - Target" "Feb - Actual" "Mar - Target" "Mar - Actual" "Apr - Target" "Apr - Actual" "May - Target" "May - Actual" "Jun - Target" "Jun - Actual" "Jul - Target" "Jul - Actual"...

I tried using search Jan*, but got "no results found". Can you please help me with this?
Hi, I have a Splunk app in a clustered environment which I need to hide from the list of apps in the UI. To do this, I have made the following change on the SH deployer:

[ui]
is_visible = false

This works fine; however, I can still access the dashboards of this app if I have the URL handy (which is okay). The only problem is that if I try to go to the search URL of the app, it gives a 404 error. Any ideas or leads on how to search an index from, or navigate to, the search window of the app? TIA!
Hey, we have Sysmon installed on our Windows servers and workstations. A quick description of what Sysmon is, from docs.microsoft.com (link): "Sysmon is a Windows system service and device driver that, once installed on a system, remains resident across system reboots to monitor and log system activity to the Windows event log."

Since Sysmon itself does not offer log analysis as a product, we thought that sending the logs into Splunk would be the right solution here. (A disadvantage of this is the need to set up and maintain dozens of Splunk UFs on workstations.)

Which app should I install if we have already deployed the Splunk TA for Windows? We have found an app and an add-on for Sysmon on Splunkbase. However, the data Sysmon generates is mostly Windows event logs and perfmon events, so those events would come into the indexes populated by the Splunk TA for Windows. We do not want the information to be duplicated and collected in both apps under different sourcetypes and indexes. Has anyone done this before? What are your recommendations? Regards, Tankwell
Please suggest the add-ons used to parse data from the following devices:

1. Cisco Switches
2. Cisco WLC
3. Cisco Routers
4. Cisco Firewalls/IPS
5. Windows AD
6. Windows DNS
7. Windows DHCP
8. Cisco VOIP
9. Cisco Umbrella
10. Cisco EDR
11. SAP ERP
12. Cloud services - Azure
Hi guys, I installed the Splunk Add-on for Apache Web Server on my UF and configured it as per the documentation. I can see logs in my indexer, but I'm facing an issue with the tags: only the "web" and "error" tags are being generated. No data is displayed when I run the data validation search:

tag=web tag=inventory tag=activity sourcetype=apache:access OR tag=web tag=inventory tag=activity sourcetype=apache:error

Below is the configuration; inputs.conf is the only config file in this path:

# /opt/splunkforwarder/etc/apps/Splunk_TA_apache/inputs.conf
[monitor:///var/log/httpd/error_log*]
sourcetype = apache:error
index = webserver
disabled = 0

[monitor:///var/log/httpd/access_log*]
sourcetype = apache:access:kv
index = webserver
disabled = 0

NOTE: I need this app to work correctly in order to use it with the Splunk ITSI web server module. Please help!
Why should data models all be accelerated? What about the built-in Data Models?
I have two dashboards. Is there a way to display a particular panel of the second dashboard through a hyperlink specified in the first dashboard? For example, if I click a hyperlink named "more examples" on the first dashboard, it should navigate to the "example panel" of the second dashboard. It would be really helpful if anyone could provide a solution for this!