All Topics

Hello fine Splunk folks, we have 10 Cloud Connectors which function as the DDC and BrokerAgent. The Splunk UF is installed as a Citrix admin, but in order to run the PowerShell scripts it must authenticate with Citrix Cloud using MFA. As such, we can't get any scripted data from TA-XD7-Broker, so the app doesn't work. I can see the scripts running, but we never receive any data; I assume they hang and time out. Are there options here to enable Splunk to run these scripts, or to collect the data another way? Thanks!
Are there any built-in health dashboards which can provide us:
1. Usage stats by user.
2. Number of concurrent sessions running over a day's timeline.
3. Usage of each dashboard and of ad-hoc searches.
Thanks.
We recently moved from a stand-alone Splunk ES search head to a clustered Splunk ES search head, and we've started to see doubling, and in some cases tripling, of some of our correlation search results where we've configured throttling, behavior we had not seen on the stand-alone machine. Scenario: a correlation search is scheduled to run 23 minutes after the hour, every 6 hours, and looks back 24 hours to now(). Throttling is set to 1 day. The search runs and generates notable events. 12 hours later, the search generates notables for the same events it found in the first run, implying that the search ran on one search head the first time and on a different search head the second time. Is there a way to confirm that all search heads share the same criteria for what should be throttled and for how long?
How do I check whether Splunk is ingesting logs from a certain host/server, or which types of logs are being received? I need to validate that a certain server/host is sending data to Splunk. Thank you in advance.
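A quick way to validate this, as a sketch (the index filter and host pattern are placeholders for the real values), is a tstats search over indexed metadata:

```
| tstats count WHERE index=* host="myserver*" BY host, sourcetype, source
```

If the host appears with a nonzero count, data is arriving; setting the time picker (earliest/latest) scopes the check to a specific window.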
When I run this filter, I want Splunk to give me a count per day based on the month, day, and year of the sys_created_on value, instead of counting any ticket that may have been touched that day but created on another day. What am I doing wrong? I'm teaching myself and have at least made it this far. I just want a count per day for the sys_created_on value. I select per-day values in the time picker, and it gives me incidents that were not created on that day.

SEARCH
| dedup dv_number
| table sys_created_on dv_number dv_u_username_id assignment_group_name dv_assigned_to dv_u_workstation_ci dv_cmdb_ci dv_u_location_1 description dv_close_notes u_last_3_worknotes
| rename sys_created_on AS "Created On", dv_number AS "Incident Number", dv_u_username_id AS "Username", assignment_group_name AS "Assignment Group", dv_assigned_to AS "Assigned to", dv_u_workstation_ci AS "Workstation ID", dv_cmdb_ci AS "CI", dv_u_location_1 AS "Location", description AS "Description", dv_close_notes AS "Closing Notes", u_last_3_worknotes AS "Last 3 Work Notes"
| sort - "Incident Number"
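One hedged sketch of counting by the created date rather than by _time: parse sys_created_on and group on the parsed day. The timestamp format "%Y-%m-%d %H:%M:%S" is an assumption about how sys_created_on is stored and may need adjusting.

```
SEARCH
| dedup dv_number
| eval created_day=strftime(strptime(sys_created_on, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| stats count AS "Incidents Created" by created_day
```

Note that the time picker always filters on _time (when the event was indexed/touched), which is why it returns tickets created on other days.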
Good afternoon, I am working on a coalesce query that looks like this:

| makeresults
| eval Name="John", NAME="Johnny", name="john"
| eval New_Name=coalesce("Name:;"+Name, "NAME:;"+NAME, "name:;"+name)
| rex mode=sed field=New_Name "s/;/\n/g"
| table New_Name

The output I get is exactly what I'm looking for, but is there a way I can simplify this? I'm trying to have the table display the field name with the coalesced value without having to type the string before each value (i.e., "Name:;"+Name). Any help would be greatly appreciated. Respectfully, Jayson
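One possible simplification, as a sketch only: coalesce() itself cannot report which argument won, but foreach can build the label from the field name via the <<FIELD>> template, so the prefix strings are not retyped per field:

```
| makeresults
| eval Name="John", NAME="Johnny", name="john"
| foreach Name NAME name
    [ eval New_Name=coalesce(New_Name, "<<FIELD>>:;" + '<<FIELD>>') ]
| rex mode=sed field=New_Name "s/;/\n/g"
| table New_Name
```

The first non-null field in the foreach list wins, mirroring coalesce's left-to-right behavior; whether fields differing only in case survive intact can depend on the environment.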
I am trying to have a threshold line that changes based on the date range. More specifically, I am trending mainframe consumption over a period of time, identifying averages by column and peaks horizontally by line (as an overlay). The machine's capacity has changed over the years, and I want to display that. I'm struggling to determine the correct syntax: | eval <field>=if(<field>... I would have "dateA to current" = x, "dateB to dateA" = y, "dateC to dateB" = z, somewhere along those lines. Thoughts?
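A hedged sketch of the idea using case() with strptime() to pick the capacity in effect at each event's time; the dates and capacity values below are placeholders to be replaced with the real upgrade dates:

```
| eval capacity=case(
    _time >= strptime("2023-01-01", "%Y-%m-%d"), 400,
    _time >= strptime("2021-06-01", "%Y-%m-%d"), 300,
    true(), 200)
```

The capacity field can then be charted as the overlay series so the threshold line steps at each upgrade date.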
Hello everyone, I hope you are having a great day. This new Dashboard Studio feature is GREAT, 10/10, but I'm having a little trouble when I want a visualization to be linked to another dashboard and also show data within the same time period as the main dashboard. What I did before was to set the earliest and latest time tokens to the ones on the main dashboard, but in the new Dashboard Studio I can't seem to find that feature. Instead I have something called:

Currently using Global Time Range input $global_time.earliest$ - $global_time.latest$

I already tried passing this info in the URL of the linked dashboard, without any luck. Does anyone know how to have the linked dashboard show values in the same time period as the main dashboard using the new Dashboard Studio feature? THANK YOU SO MUCH
I recently installed SC4S. For most logs it works as expected; however, it is improperly indexing Juniper NetScreen as osnix with sourcetype nix:syslog. I've tried adding a filter to identify specific IPs as NetScreen, but it did not work. Any assistance is appreciated.

Example raw log:

<133>Apr 19 20:06:42 172.#.#.2/172.#.#.2 SC-NS1-SSG140: NetScreen device_id=SC-NS1-SSG140 [Root]system-notification-00257(traffic): start_time="2021-04-21 17:13:42" duration=0 policy_id=320001 service=tcp/port:8013 proto=6 src zone=Null dst zone=self action=Deny sent=0 rcvd=52 src=10.#.#.133 dst=10.#.#.1 src_port=53563 dst_port=8013 session_id=0 reason=Traffic Denied

Splunk result:

SC-NS1-SSG140: NetScreen device_id=SC-NS1-SSG140 [Root]system-notification-00257(traffic): start_time="2021-04-21 17:13:42" duration=0 policy_id=320001 service=tcp/port:8013 proto=6 src zone=Null dst zone=self action=Deny sent=0 rcvd=52 src=10.#.#.1 dst=10.#.#.133 src_port=53563 dst_port=8013 session_id=0 reason=Traffic Denied
host = 172.#.#.2/172.#.#.2
index = osnix
sc4s_fromhostip = 172.#.#.150
sc4s_syslog_facility = user
sc4s_syslog_format = rfc3164
sc4s_vendor_product = nix_syslog
source = program:SC-NS1-SSG140
sourcetype = nix:syslog
I have Windows Event Forwarding configured and have installed a Universal Forwarder to send events to a Heavy Forwarder, which then sends them on to the indexers. I only have a basic configuration on the UF, but I need to override a couple of fields such as computer name and index. I have set up inputs.conf, props.conf, and transforms.conf on the HF as detailed here: https://community.splunk.com/t5/Getting-Data-In/how-can-we-split-forwarded-windows-event-logs-by-host/m-p/61463, but the configuration seems to get ignored. I have also simplified this to just the following in inputs.conf on the HF, but it makes no difference; events still go to the main index.

[WinEventLog://ForwardedEvents]
index = winevtlog
disabled = 0
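One likely explanation: [WinEventLog://...] stanzas in inputs.conf only take effect on the machine actually collecting the events (the UF), so a copy on the HF is ignored. On the HF, index-time overrides go in props.conf and transforms.conf instead. A minimal sketch, assuming the events arrive with sourcetype WinEventLog:ForwardedEvents and that ComputerName appears in the raw event (both assumptions to verify in your data):

```
# props.conf on the HF
[WinEventLog:ForwardedEvents]
TRANSFORMS-wef = override_index, override_host

# transforms.conf on the HF
[override_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = winevtlog

[override_host]
REGEX = ComputerName=([^\r\n]+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```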
Hi all, I have hundreds of UFs (universal forwarders) set up and sending Windows event logs to our Splunk Cloud instance. There is a requirement to also send the Windows event logs to our parent company, but they use LogRhythm. I have set up a separate app under apps/logrhythm, and it successfully sends data to both Splunk Cloud and the LogRhythm collector. Sadly, the LogRhythm collector can't parse the data, so I tried to change it to XML via the renderXml directive, as below.

C:\Program Files\SplunkUniversalForwarder\etc\apps\logrhythm\default\inputs.conf

[WinEventLog://Application]
index = wineventlog
renderXml = true
disabled = 0

[WinEventLog://Security]
index = wineventlog
renderXml = true
disabled = 0

[WinEventLog://System]
index = wineventlog
renderXml = true
disabled = 0

C:\Program Files\SplunkUniversalForwarder\etc\apps\logrhythm\default\outputs.conf

[tcpout]
defaultGroup = logrhythm,splunkcloud

[tcpout:logrhythm]
server = <Servername>:514
sendCookedData = false
compressed = false
dnsResolutionInterval = 60

Sadly, this also sends XML-format Windows event logs to our Splunk Cloud instance, which completely mangles them and doesn't match all our other data sent as wineventlog. What is the best way to send wineventlog data, as before, to Splunk Cloud and XML wineventlog data to LogRhythm?
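One constraint worth noting: a single WinEventLog input has exactly one rendering, so the same stream cannot be plain text for one destination and XML for another. Two common approaches are to route each input to only one output group with _TCP_ROUTING (keeping renderXml off for the inputs destined for Splunk Cloud), or to put a heavy forwarder in front of LogRhythm and let it transform that copy. A sketch of the routing variant, assuming the tcpout group names from the outputs.conf above:

```
# inputs.conf - each stanza routed to exactly one output group
[WinEventLog://Security]
index = wineventlog
renderXml = false
_TCP_ROUTING = splunkcloud
```

With this approach the LogRhythm feed would need to come from a separate collection path (e.g. an intermediate heavy forwarder), since the UF cannot collect the same channel twice with different renderings.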
Hi, I am trying to search across two separate indexes and then display fields returned from both indexes on a single line of my output. Both indexes have a common field named "user", and I am searching both indexes using this field. The first part is index=mcafee_wg user=<supplied value>. I want to search this index for a given value of the "user" field and display the value of a field named "url" in my output; "url" is a field in this index. I also want to search a different index with index=cisco_fmc user=<supplied value>. As above, I want to search this index for a given value of the "user" field, and from this index I want to display the value of a field named "detection", which is a field in that index. So basically I want to combine these three fields together and output them on the same line, such as:

user       url           detection
value      value     value

Thanks!
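A common pattern for this, as a sketch (index and field names taken from the question; the user value is a placeholder), is to search both indexes in one query and aggregate by the shared field:

```
(index=mcafee_wg user="someuser") OR (index=cisco_fmc user="someuser")
| stats values(url) AS url, values(detection) AS detection BY user
```

stats merges the rows from both indexes onto one line per user, with url populated from mcafee_wg events and detection from cisco_fmc events.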
Users have been complaining that they are not getting email alerts. While troubleshooting this issue, I noticed the alerts were also not being written to the Triggered Alerts area, even though that action is selected in the alert. I am able to send email notifications using this SPL:

index=_internal | stats count by host | top 1 host | sendemail to="merzinger@test.com" sendresults=true

To troubleshoot further, I created a very simple alert with this SPL:

index=_internal | stats count by host

The search is set to look back 15 minutes, and the cron schedule is set to * * * * * to run every minute. The action for this alert is just to add the event to Triggered Alerts if results > 0. This search definitely returns results, but the alert actions don't seem to be triggering. Any help would be appreciated.
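One way to check whether the scheduler is actually running the alert and firing its actions, as a sketch (replace the savedsearch_name value with the real alert name):

```
index=_internal sourcetype=scheduler savedsearch_name="My Simple Alert"
| table _time, status, result_count, alert_actions, run_time
```

If result_count is nonzero but alert_actions is empty, the trigger condition or action configuration is the likely culprit rather than the search itself.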
We want to cut down on the amount of data going to Splunk from our Palo Alto firewalls. To do that, we want to trim the unnecessary fields from the logs but still have them parse correctly in Splunk. When we create the custom log format, the data is no longer recognized as PAN:Traffic; instead it is parsed as PAN:Firewall. We used the custom format from Palo Alto's website and included the commas where they were supposed to go. BTW, this is configured on Panorama in the syslog settings.

Before (from the Palo Alto website): FUTURE_USE, Receive Time, Serial Number, Type, Threat/Content Type, FUTURE_USE, Generated Time, Source Address, Destination Address, NAT Source IP, NAT Destination IP, Rule Name, Source User, Destination User, Application, Virtual System, Source Zone, Destination Zone, Inbound Interface, Outbound Interface, Log Action, FUTURE_USE, Session ID, Repeat Count, Source Port, Destination Port, NAT Source Port, NAT Destination Port, Flags, Protocol, Action, Bytes, Bytes Sent, Bytes Received, Packets, Start Time, Elapsed Time, Category, FUTURE_USE, Sequence Number, Action Flags, Source Location, Destination Location, FUTURE_USE, Packets Sent, Packets Received, Session End Reason, Device Group Hierarchy Level 1, Device Group Hierarchy Level 2, Device Group Hierarchy Level 3, Device Group Hierarchy Level 4, Virtual System Name, Device Name, Action Source, Source VM UUID, Destination VM UUID, Tunnel ID/IMSI, Monitor Tag/IMEI, Parent Session ID, Parent Start Time, Tunnel Type, SCTP Association ID, SCTP Chunks, SCTP Chunks Sent, SCTP Chunks Received, Rule UUID, HTTP/2 Connection, App Flap Count, Policy ID, Link Switches, SD-WAN Cluster, SD-WAN Device Type, SD-WAN Cluster Type, SD-WAN Site, Dynamic User Group Name

What we want:

,$receive_time,,$type,$subtype,,$time_generated,$src,$dst,$natsrc,$natdst,$rule,$srcuser,$dstuser,$app,,$to,$from,$inbound_if,$outbound_if,,,,$repeatcnt,$sport,$dport,$natsport,$natdport,$flags,$proto,$action,$bytes,$bytes_sent,$bytes_received,$packets,,,$category,,$seqno,,$srcloc,$dstloc,,$pkts_sent,$pkts_received,$session_end_reason,,,,,,$device_name,$action_source,,,,,,,,,,,,,,,,,,,,,,

We have even captured packets and compared what we are getting with what is expected, and they seem to match. Not sure what is wrong, but I would love some help. Not sure Palo Alto will help, though we did submit a ticket to them. Splunk closed my ticket because the app is "Vendor Supported". Any advice on doing this, or any other suggestions on how anyone else is handling Palo Alto logs? Thanks!
I want to generate an alert when the HTTP status field changes from 500 to 200. There are some 502 response codes, and the success rate of 200 responses is very high, so we want to alert on the first successful 200 response that follows a 500.
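A hedged sketch of detecting the first 200 that follows a 500 with streamstats; the field name status, the base search, and the grouping field host are all assumptions about the data:

```
sourcetype=access_combined (status=500 OR status=200)
| sort 0 _time
| streamstats window=1 current=f last(status) AS prev_status by host
| where status=200 AND prev_status=500
```

Filtering to only 500 and 200 up front means the intervening 502s are ignored, so prev_status reflects the last relevant code before each 200.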
How do I properly configure the "Alerts for Splunk Admins" Splunk app? Any demo or training link is appreciated. Many of the reports are blank. How do I configure it so that it produces some useful data?
I would like to ask how to set up Splunk as a SIEM in my on-prem network architecture. Does it connect to the switch that connects all the PCs? I intend to use an appliance server with Splunk installed. In a nutshell, I need help setting up an enterprise version of Splunk in our network systems. Secondly, regarding the sensors for each system, are there any caveats with the Windows firewall? Finally, can I place the IDP/IDS before or after the firewall?
How do I configure the Splunk app Meta Woot! to produce reports? Most of the reports are blank. Any demo or training link for the Meta Woot! app would be appreciated.
We are having an issue: sometimes our input XML file is split into two. In the image above you can see that both are the same file, but the last 6 lines are split into another file, so when we read the file using spath we get null values.

Example file in correct format:

<?xml version="1.0" encoding="UTF-8"?><message>
<software-version>4.1.1810-65</software-version>
<customer-job-id>722739-151801-NBS-CMC400-001-LT_Slit-Merge-NBS-001</customer-job-id>
<submission>
<submit-number>1</submit-number>
<job-submission-id>722739-151801-NBS-CMC400-001-LT_Slit-Merge-NBS-001.s1</job-submission-id>
<frame-inches-along-web-initial-value>10.850</frame-inches-along-web-initial-value>
<frame-inches-across-web>17.000</frame-inches-across-web>
<statistics>
<current-copy/>
<actual-linear-feet-used>3515.5</actual-linear-feet-used>
<sides>
<side-a>
<frames-printed-ok>3844</frames-printed-ok>
<frames-printed-error>0</frames-printed-error>
</side-a>
<side-b>
<frames-printed-ok>3844</frames-printed-ok>
<frames-printed-error>0</frames-printed-error>
</side-b>
</sides>
<ink-usage>
<units>liters</units>
<sides>
<side-a completed="true">
<fixer>0.004482</fixer>
<black>0.01374</black>
<cyan>0.002765</cyan>
<magenta>0.007962</magenta>
<yellow>0.000572</yellow>
</side-a>
<side-b completed="true">
<fixer>0.003547</fixer>
<black>0.01467</black>
<cyan>0.002751</cyan>
<magenta>0.009444</magenta>
<yellow>0.00047</yellow>
</side-b>
</sides>
</ink-usage>
</statistics>
</submission>
</message>

Example file in the split format:

File 1:

<?xml version="1.0" encoding="UTF-8"?><message>
<software-version>4.1.1810-65</software-version>
<customer-job-id>722739-151801-NBS-CMC400-001-LT_Slit-Merge-NBS-001</customer-job-id>
<submission>
<submit-number>1</submit-number>
<job-submission-id>722739-151801-NBS-CMC400-001-LT_Slit-Merge-NBS-001.s1</job-submission-id>
<frame-inches-along-web-initial-value>10.850</frame-inches-along-web-initial-value>
<frame-inches-across-web>17.000</frame-inches-across-web>
<statistics>
<current-copy/>
<actual-linear-feet-used>3515.5</actual-linear-feet-used>
<sides>
<side-a>
<frames-printed-ok>3844</frames-printed-ok>
<frames-printed-error>0</frames-printed-error>
</side-a>
<side-b>
<frames-printed-ok>3844</frames-printed-ok>
<frames-printed-error>0</frames-printed-error>
</side-b>
</sides>
<ink-usage>
<units>liters</units>
<sides>
<side-a completed="true">
<fixer>0.004482</fixer>
<black>0.01374</black>
<cyan>0.002765</cyan>
<magenta>0.007962</magenta>
<yellow>0.000572</yellow>
</side-a>
<side-b completed="true">
<fixer>0.003547</fixer>
<black>0.01467</black>
<cyan>0.002751</cyan>
<magenta>0.009444</magenta>
<yellow>0.00047</yellow>

File 2:

</side-b>
</sides>
</ink-usage>
</statistics>
</submission>
</message>

Query:

(index="sample_*") sourcetype=sample_job_xml
| where host="XP251"
| where source="apc/def/722739-151801-NBS-CMC400-001-LT_Slit-Merge-NBS-001"
| spath input=_raw path=message.customer-job-id output=customer-job-id
| spath input=_raw path=message.submission output=submission
| spath input=submission path=job-submission-id output=job-submission-id
| spath input=submission path=statistics.actual-linear-feet-used output=actual-linear-feet-used
| spath input=submission path=frame-inches-across-web output=frame-inches-across-web
| spath input=submission path=frame-inches-along-web output=frame-inches-along-web
| spath input=submission path=job-manifest.end-range.side-a.copy-relative-frame-number output=side-a.copy-relative-frame-number
| spath input=submission path=job-manifest.end-range.side-b.copy-relative-frame-number output=side-b.copy-relative-frame-number
| spath input=submission path=statistics.sides.side-a.frames-printed-ok output=side-a.frames-printed-ok
| spath input=submission path=statistics.sides.side-b.frames-printed-ok output=side-b.frames-printed-ok
| spath input=submission path=statistics.ink-usage.sides.side-a.fixer output=side-a.fixer
| spath input=submission path=statistics.ink-usage.sides.side-b.fixer output=side-b.fixer
| spath input=submission path=statistics.ink-usage.sides.side-a.black output=sides.side-a.black
| spath input=submission path=statistics.ink-usage.sides.side-b.black output=side-b.black
| spath input=submission path=statistics.ink-usage.sides.side-a.cyan output=side-a.cyan
| spath input=submission path=statistics.ink-usage.sides.side-b.cyan output=side-b.cyan
| spath input=submission path=statistics.ink-usage.sides.side-a.magenta output=side-a.magenta
| spath input=submission path=statistics.ink-usage.sides.side-b.magenta output=side-b.magenta
| spath input=submission path=statistics.ink-usage.sides.side-a.yellow output=side-a.yellow
| spath input=submission path=statistics.ink-usage.sides.side-b.yellow output=side-b.yellow
| fields host,source,customer-job-id,job-submission-id,actual-linear-feet-used,frame-inches-across-web,frame-inches-along-web,side-a.copy-relative-frame-number,side-b.copy-relative-frame-number,side-a.frames-printed-ok,side-b.frames-printed-ok,side-a.fixer,side-b.fixer,sides.side-a.black,side-b.black,side-a.cyan,side-b.cyan,side-a.magenta,side-b.magenta,side-a.yellow,side-b.yellow
| eval res=substr('customer-job-id',0,9), numberString=replace(if(like(res, "%v1_%"), mvindex(split(res,"_"),1), if(like(res, "%%"),mvindex(split(res,"-"),0),res)),"\D",""), Jobnumber=if('customer-job-id'="startup-calibration","Diagnostic",if(len(numberString)=6,numberString,"UnKnown"))
| table host,source,Jobnumber,customer-job-id,job-submission-id,actual-linear-feet-used,frame-inches-across-web,frame-inches-along-web,side-a.copy-relative-frame-number,side-b.copy-relative-frame-number,side-a.frames-printed-ok,side-b.frames-printed-ok,side-a.fixer,side-b.fixer,sides.side-a.black,side-b.black,side-a.cyan,side-b.cyan,side-a.magenta,side-b.magenta,side-a.yellow,side-b.yellow
| fillnull value="NULL"
| where host="XP251" and 'customer-job-id'="722739-151801-NBS-CMC400-001-LT_Slit-Merge-NBS-001"

In the sample result I need to get the values instead of null when the file is split into two. Thanks in advance.
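One possible cause worth ruling out: if the writer pauses mid-file, Splunk's file monitor can close and index the first chunk before the rest is written, producing exactly this kind of split event. A mitigation sketch is to raise time_before_close on the monitor input so Splunk waits longer after reaching EOF; the path and value below are placeholders:

```
# inputs.conf on the forwarder reading the XML files
[monitor:///data/apc/def]
time_before_close = 30
sourcetype = sample_job_xml
```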
I am trying to create a search in which I use two different indexes and want to produce a combined result as a table. The table should have some fields from both indexes. There is one field with the same name in both indexes, so I can't pull results from that field directly: index 1 has a field called URL, and index 2 also has a field called URL. I want to change the name of the field in one index, e.g., URL to URL_1 for index 1.
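One hedged sketch: keep both index searches in one query, split the shared URL field by index with eval, and then aggregate on whatever field joins the two datasets (index names and common_key are placeholders for the real values):

```
(index=index1) OR (index=index2)
| eval URL_1=if(index="index1", URL, null())
| eval URL_2=if(index="index2", URL, null())
| stats values(URL_1) AS URL_1, values(URL_2) AS URL_2 BY common_key
```

Since every event carries an index field, the eval cleanly separates the two URL variants without a join.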