All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is it possible to outline or create borders for the table present in the body of the email? I need bold borders for each of the cells.
Hi, I have a dropdown with a dynamic query:

<input type="dropdown" token="clientId" searchWhenChanged="true">
  <label>Integrator</label>
  <fieldForLabel>client_id</fieldForLabel>
  <fieldForValue>client</fieldForValue>
  <search>
    <query>basic search | lookup clients client_id as client_id OUTPUTNEW client_name client_id | eval client = client_name +"(" + client_id +")" | dedup client | table client</query>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
</input>

For the dropdown displayed in the dashboard, I want the entries to look exactly like clientName(client_id), e.g. Tester(123), but in the panel queries I want the token to carry only the client_id, not the client name. Any help would be appreciated. Thanks!
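One possible approach (an untested sketch; `client_display` is an assumed helper field name): compute the display string in the dropdown search, point fieldForLabel at it, and point fieldForValue at client_id, so the token holds only the id:

```
<input type="dropdown" token="clientId" searchWhenChanged="true">
  <label>Integrator</label>
  <fieldForLabel>client_display</fieldForLabel>
  <fieldForValue>client_id</fieldForValue>
  <search>
    <query>basic search | lookup clients client_id OUTPUTNEW client_name | eval client_display = client_name + "(" + client_id + ")" | dedup client_id | table client_display client_id</query>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
</input>
```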
Hello, is it possible to set up an SQS consumer on Splunk Cloud? I have a vendor that drops logs into an S3 bucket that is assigned to me but is under their control. They have also set up an SQS queue and disclosed the credentials to me. How would you suggest I pull this into Splunk Cloud?
Hi folks, we would like to mask mainframe logs. We can use props and transforms; apart from that, do we have any other alternative?
Looking for help with Splunk search syntax. Starting from index=* sourcetype=asa, I want to search for dest_port 123 where the dest_ip does NOT fall within 172.16.0.0/16 or 10.0.0.0/8. Basically, I want to see dest_port 123 where dest_ip is a public IP and not in any of my internal IP ranges.
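One way to express this (a sketch; index and field names assumed from the post) is with eval's cidrmatch() function, which tests whether an IP falls inside a CIDR range:

```
index=* sourcetype=asa dest_port=123
| where NOT (cidrmatch("172.16.0.0/16", dest_ip) OR cidrmatch("10.0.0.0/8", dest_ip))
```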
{\"reference_id\":\"REF1\",\"sub_reference_id\":\"sub_ref_1\"} required output : table of reference_id, sub_reference_id For the above search string :  I am trying  : rex field=_raw "reference_id... See more...
{\"reference_id\":\"REF1\",\"sub_reference_id\":\"sub_ref_1\"} required output : table of reference_id, sub_reference_id For the above search string :  I am trying  : rex field=_raw "reference_id\\\\":\\\\"(?P<reference_id>.P[^\"]*)" But it is not working. can someone help with the correct rex command to extract the fields explicitely
Hi all, I am trying to implement a few algorithms from the MLTK app. According to the MLTK algorithm documentation, only DensityFunction supports grouping data with a "by" clause in the fit command: https://docs.splunk.com/Documentation/MLApp/5.2.0/User/Algorithms#DensityFunction I found the same after triaging other algorithms. Could someone please confirm whether we can use | fit OneClassSVM fields* by "date_wday,date_hour" into model_svm, and if it's not supported by the default settings, is there any way to add this feature? Thanks!
Hi, my Splunk instance is not sending email alerts for a new alert that I just set up. I am getting other alert emails from the same Splunk instance, but the new alert isn't sending emails although it generates a stats table. I have set the alert to trigger per result, and the same trigger condition works for other alerts. Any help is appreciated.
I want to search using the corrected value of _time, but it isn't working. After |eval _time = _time + 600, searching with a time range still filters on the pre-modification values. Please advise.
Hoping to filter a search based on a list of values from a subsearch, where in both cases it's matching against a rex-extracted field.

index=x
  [ search index=x 2e5b422130e64645cb9681a32fd28cb6
    | rex "downstreamTraceID\=\{ (?<downstream_trace_id>.{32})"
    | fields downstream_trace_id ]
| rex "downstreamTraceID\=\{ (?<downstream_trace_id>.{32})"
I do not understand why I cannot schedule PDF delivery for a particular dashboard. The option is grayed out, and it shouldn't be. The role I am working under has the schedule_search capability, and in other dashboards that I've created, the option to schedule PDF delivery is available. I'm pulling my hair out here, because on the surface it would appear to be a permissions issue. For this particular dashboard, I had copied its XML; the dashboard was owned by another person who had cloned it from someone else. I copied and pasted the code into a new dashboard, thinking that recreating it from scratch might help, but it's a no-go: the "Schedule PDF Delivery" option remains grayed out. Do permissions somehow carry over in the XML from the original dashboard? If so, is there a workaround? Any help is greatly appreciated!
Hey Splunkers! We are running into an issue with an on-prem distributed deployment where the AWS feed is not extracting nested JSON fields at search time without the use of spath. We get first-level and partial second-level auto extraction, but it stops there. We need to normalize this data with friendly-name aliases, and would like to avoid end users having to use spath with a long rename macro. Yes, KV_MODE is set to JSON on the SH, IDX, and HF. No, we'd rather not perform indexed extractions. We've upped several limits and are unsure why it wouldn't just auto-extract at search time. Please help! Here is the issue with using spath in calculated fields as a workaround: I can calculate version and id consistently, but next-level nested values with lists do not return fields at search time.

works -> aws : EVAL-version version spath('BodyJson.Message', "version")
works -> aws : EVAL-id id spath('BodyJson.Message', "id")
doesn't work -> aws : EVAL-resources resources spath('BodyJson.Message', 'resources{}')

BodyJson: { Message: {"version":"0","id":"-e154-88b-c","detail-type":"Findings - Imported","source":"aws.","account":"4724","time":"2021-01-13T20:09:26Z","region":"ca-central-1","resources":["arn:aws:ca"],"detail":{"findings":[{"ProductArn":"arn:aws:"...

What am I doing wrong here? Also, is there a known limitation on how many cycles of spath calculations the system will run on a specific field? Thanks in advance!
Hello everyone, I'm hoping I can get some help on this. We have the InfoSec app on our Splunk single-server deployment. On the Network Anomalies page, I'm getting the warning "Eventtype 'wg_traffic_allow' does not exist or is disabled." Here is the search the dashboard is attempting to run:

`infosec-indexes` tag=network tag=communicate | streamstats current=f last(_time) as next_time by dest | eval gap = next_time - _time | stats count, avg(gap) as avg_gap, var(gap) as var_gap by dest src | search avg_gap<50 count>500 | stats dc(src)

Based on a Google search and looking through the results, I was going to check that the eventtype was shared globally. That's when I saw that the eventtype is actually defined as 'wg_traffic_allowed', with an 'ed' at the end. So now my question is: where is it even trying to pull that eventtype from? It's not in the search, which seems to be searching on tags, not a specific eventtype.
Hi Splunkers, below is my issue: I have multiple XML files. I need to monitor all of them and extract the values of Status (Failed or Passed) and Message.
1) If Status = Failed, extract the second-to-last Message of the LogItem values (e.g., "No files found. Stopped.")
2) If Status = Passed, extract the last Message of the LogItem values (e.g., "Download of file.txt succeeded.")
I am trying the search below but need to correct it:
<search> | spath output=Message path=LogFile.LogItem.Message{2} | spath output=Timestamp path=LogFile.LogItem{@Timestamp} | spath output=Status path=LogFile.LogItem{@Status} | stats last(eval(Status="Passed")) as Passed_Status first(eval(Status="Failed")) as Failed_Status last(Timestamp) as Timestamp last(Message) as last_Message first(Message) as first_Message by source
Thank you in advance!

FIRST FILE:
<LogFile>
<LogItem Timestamp="12/15/2020 2:45:04 AM.412" Priority="0" Status="Neutral" Sequence="1">
<Message>Download start at 12/15/2020 2:45:04 AM </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="12/15/2020 2:45:04 AM.414" Priority="0" Status="Neutral" Sequence="2">
<Message>Setup Configuration</Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="12/15/2020 2:45:04 AM.420" Priority="0" Status="Neutral" Sequence="3">
<Message>Session starts to connect. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="12/15/2020 2:45:08 AM.797" Priority="0" Status="Passed" Sequence="4">
<Message>Session connected successfully. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="12/15/2020 2:45:08 AM.799" Priority="0" Status="Neutral" Sequence="5">
<Message>starts to tranfer file. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="12/15/2020 2:45:11 AM.226" Priority="0" Status="Failed" Sequence="6">
<Message>No files found. Stopped.
</Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="12/15/2020 2:45:11 AM.345" Priority="0" Status="Failed" Sequence="7">
<Message>Error StackTrace: at XXX.Program.Main(String[] args) </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
</LogFile>

SECOND FILE:
<LogFile>
<LogItem Timestamp="06/12/2020 10:25:04.69" Priority="0" Status="Neutral" Sequence="1">
<Message>Download start at 06/12/2020 10:25:04 </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="06/12/2020 10:25:04.72" Priority="0" Status="Neutral" Sequence="2">
<Message>Setup Configuration</Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="06/12/2020 10:25:04.78" Priority="0" Status="Neutral" Sequence="3">
<Message>Session starts to connect. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="06/12/2020 10:25:05.243" Priority="0" Status="Passed" Sequence="4">
<Message>Session connected successfully. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="06/12/2020 10:25:05.246" Priority="0" Status="Neutral" Sequence="5">
<Message>starts to tranfer file. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="1/6/2021 2:45:05 AM.587" Priority="0" Status="Passed" Sequence="6">
<Message>Session connected successfully. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
<LogItem Timestamp="1/6/2021 2:45:08 AM.274" Priority="0" Status="Passed" Sequence="7">
<Message>Download of file.txt succeeded. </Message>
<StackTrace Depth="1" Method="XXX.Program.Main"/>
</LogItem>
</LogFile>
I'm running a search (below) whose results sometimes, in certain fields, display in the GUI as empty (null) but aren't. When I export the results, I see "" as the value in the field. I've tried several things to populate this field with data so I can search on it, but I've had no luck. Any thoughts / guidance is greatly appreciated.

search string: | rest /servicesNS/-/-/saved/searches | where disabled=0 AND splunk_server="some_server" | fillnull value=na next_scheduled_time

When I export the results and open them in Notepad, the results are below:
title,"cron_schedule","dispatch.earliest_time","dispatch.latest_time","alert.expires","next_scheduled_time",action
"Access - Distinct Sources","","-48h@h",now,24h,"",
"Access - Distinct Users","","-48h@h",now,24h,"",
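One possible explanation worth testing (a sketch, not a confirmed fix): fillnull only replaces fields that are genuinely null, while the REST endpoint may return an empty string "", which is not null. An eval that catches both cases might look like:

```
| rest /servicesNS/-/-/saved/searches
| where disabled=0 AND splunk_server="some_server"
| eval next_scheduled_time=if(isnull(next_scheduled_time) OR next_scheduled_time=="", "na", next_scheduled_time)
```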
We are sending logs received by our heavy forwarder to a 3rd-party syslog server. We thought we had it configured so that only WinEventLogs are forwarded to the 3rd party, but it turns out they're getting everything (sourcetypes they don't need, etc.). What is the best way to filter out all of these other events, either from the universal forwarders to the HF, or from the HF to the 3rd party? For background, here's our basic setup; I'll post our config further down.

UFs -> Heavy Forwarder -> 3rd-party syslog-ng server

The UFs themselves have two tcpouts: one to our indexers, and the other to the heavy forwarder. They are otherwise identical. The heavy forwarder has a props, transforms, and outputs config that is supposed to route only WinEventLog:* to the syslog destination. It has a separate outputs.conf that tells it to turn off local indexing and disable the forwardedindex filter (maybe this is the problem; not sure). We don't seem to have anything specifically telling the HF not to send anything except WinEventLogs to the syslog destination; not sure how to implement that, if it's needed.

[One side note: we eventually need to send DHCP logs to the 3rd party. Right now, they're getting there, but the "source" is actually showing as the heavy forwarder. They're configured on the UFs to go to a different index from the Windows event logs, but I would need to stop the heavy forwarder being inserted as the source, and allow that index to be sent from the HF to the 3rd party. I might save this for a different discussion post, but just throwing it out there.]

Heavy forwarder props.conf:
[source::WinEventLog:*]
SEDCMD-tabreplace = s/(?m-s)[\r\n]+/ /g
TRANSFORMS-routing = 3rdparty

Heavy forwarder transforms.conf:
[3rdparty]
REGEX = .
SOURCE_KEY = MetaData:Host
DEST_KEY = _SYSLOG_ROUTING
FORMAT = to3rdparty

Heavy forwarder outputs.conf:
[syslog]
defaultGroup = to3rdparty

[syslog:to3rdparty]
sendCookedData = false
server = 1.1.1.1:1111 (3rd-party syslog server)
type = udp
disabled = false
priority = <13>
maxEventSize = 16384
timestampformat = %b %d %H:%M:%S

UF outputs.conf:
[tcpout:our_HF]
server = <HF info>
useACK = true
#sendCookedData = false
forceTimebasedAutoLB = false

UF inputs.conf:
[default]
_TCP_ROUTING = primary_indexers,our_HF
evt_resolve_ad_obj = 0

Thank you for any help!
The approach from this previous Splunk thread works fine: https://community.splunk.com/t5/Archive/Insert-sign-for-each-result-in-a-specific-column/m-p/149167

| eval sensor = sensor + " %"

gives 51%. However, if the field name has a space, it does not work:

| eval "System Outlook" = "System Outlook" + " %"

gives the literal string System Outlook %. I'm assuming it needs some sort of backslash escaping? (I've tried a bunch of ways.) I know I can just rename the field without the space, but I want to keep the space. Any ideas?
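One likely fix (a sketch, based on eval's quoting rules: double quotes denote string literals, single quotes denote field names): quote the field reference on the right-hand side with single quotes, so it is read as a field rather than as the literal text "System Outlook":

```
| eval "System Outlook" = 'System Outlook' + " %"
```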
Hi, longtime Splunker, first-time poster. My goal is to find the most common and least common characters in a field across multiple events.

event1: commandline="the quick brown fox"
event2: commandline="jumped over the lazy dog"

The search I've tried:

index=data | fields command_line | rex field=command_line "(?<cmd_char>.)" | top cmd_char

This rex only pulls the first character from the field; I want counts across the whole command line. Desired results from top (or whatever function), with characters quoted since spaces would be hard to see here:

char | count
" " | 7
"e" | 3
"t" | 2
"u" | 2
"h" | 1
"q" | 1
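One possible approach (an untested sketch): use max_match=0 so rex captures every character into a multivalue field, then expand the values into separate results and count them:

```
index=data
| rex field=command_line max_match=0 "(?<cmd_char>.)"
| mvexpand cmd_char
| top limit=0 cmd_char
```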
Hi everyone, I have one requirement. I have a dashboard which currently shows the SUCCESS and FAILURE build result trends. I have two dropdowns: one for ORG NAME and the other for BUILD RESULT. Suppose I select Yesterday for all orgs, with SUCCESS COUNT = 4 and FAILURE COUNT = 4; the panel shows the SUCCESS and FAILURE counts individually in the trend. I want one more trend that shows the combined total, i.e. 8: one trend for SUCCESS (4), one trend for FAILURE (4), and one TOTAL trend (8). Right now I have the SUCCESS and FAILURE trends in that panel; I want a third trend alongside these two showing their total. Below is my code:

<row>
  <panel>
    <chart>
      <title>Jenkins Builds Trending Report</title>
      <search>
        <query>index="abc" sourcetype="xyz" $orgname$ $buildresult$ | timechart span=1d count(BuildResult) by BuildResult useother=f limit=25</query>
        <earliest>$field4.earliest$</earliest>
        <latest>$field4.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.text">Date</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.text">Count</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.enabled">0</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">line</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">connect</option>
      <option name="charting.chart.showDataLabels">none</option>
      <option name="charting.chart.showMarkers">1</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.placement">right</option>
      <option name="charting.lineDashStyle">longDash</option>
      <option name="height">400</option>
      <option name="trellis.enabled">0</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">large</option>
      <option name="trellis.splitBy">OrgFolderName</option>
    </chart>
  </panel>
</row>

Can someone please guide me on this?
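One possible approach (an untested sketch, using the index/sourcetype names from the post): append a Total series after the timechart with addtotals, which sums the per-result columns in each time bucket:

```
index="abc" sourcetype="xyz" $orgname$ $buildresult$
| timechart span=1d count(BuildResult) by BuildResult useother=f limit=25
| addtotals fieldname=Total
```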
Hi there, I have installed and set up the app Google Import/Export 2.0.5, and I followed all the steps, but it is not working:
- I set the key ID (JSON)
- I added the Data input > Google Spreadsheet
- From my Drive, I shared the file with the email (key ID)
But it still does not work. Do you know if there is another thing we have to do?