All Posts



We are using the v9 log format in Splunk. It is working fine and we are able to see logs in Splunk as expected. We added 4 more fields in transforms.conf and tested the add-on in Splunk. The additional fields then take the values of s3_filename, bucket name, and prefix, which are appended at the end; this is not the correct behavior.

We are looking for a solution that parses the correct value into the correct field, with the additional fields getting null values if there are no values for them in the logs.

transforms.conf

[proxylogs_fields]
DELIMS = ","
FIELDS = Timestamp,policy_identities,src,src_translated_ip,dest,content_type,action,url,http_referrer,http_user_agent,status,requestSize,responseSize,responseBodySize,sha256,category,av_detection,pua,amp_disposition,amp_malwarename,amp_score,policy_identity_type,blocked_category,identities,identity_type,request_method,dlp_status,certificate_errors,filename,rulesetID,ruleID,destinationListID,isolateAction,fileAction,warnStatus,forwarding_method,Producer,test_feild1,test_field2,test_field3,test_field4,s3_filename,aws_bucket_name,aws_prefix

props.conf

[cisco:cloud_security:proxy]
REPORT-proxylogs-fields = proxylogs_fields,extract_url_domain
LINE_BREAKER = ([\r\n]+)
# EVENT_BREAKER = ([\r\n]+)
# EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
CHARSET = AUTO
disabled = false
TRUNCATE = 1000000
MAX_EVENTS = 1000000
EVAL-product = "Cisco Secure Access and Umbrella"
EVAL-vendor = "Cisco"
EVAL-vendor_product = "Cisco Secure Access/Umbrella"
MAX_TIMESTAMP_LOOKAHEAD = 22
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"
TZ = UTC
FIELDALIAS-bytes_in = requestSize as bytes_in
FIELDALIAS-bytes_out = responseSize as bytes_out
EVAL-action = lower(action)
EVAL-app = "Cisco Cloud Security"
FIELDALIAS-http_content_type = content_type as http_content_type
EVAL-http_user_agent_length = len(http_user_agent)
EVAL-url_length = len(url)
EVAL-dest = if(isnotnull(dest),dest,url_domain)
EVAL-bytes = requestSize + responseSize
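The shift described above is inherent to positional, delimiter-based extraction: if a raw event carries fewer columns than FIELDS lists, the remaining values map onto whichever field names come next in the list. A small Python sketch (not Splunk itself; the field values are made up) illustrates why the four new fields swallow the s3_filename/bucket/prefix values:

```python
import csv
from io import StringIO

# Simplified FIELDS list: some original fields, then the 4 new fields,
# then the metadata columns (mirrors the order used in transforms.conf).
fields = ["Timestamp", "action", "url",
          "test_field1", "test_field2", "test_field3", "test_field4",
          "s3_filename", "aws_bucket_name", "aws_prefix"]

# A raw event that predates the 4 new fields: it only carries the original
# columns plus the 3 metadata values -- 6 values for 10 field names.
raw = '2024-01-01 00:00:00,allowed,example.com,file.log,my-bucket,logs/'
values = next(csv.reader(StringIO(raw)))

# Positional mapping: the metadata values land in the new test_field* slots,
# and the real s3_filename/bucket/prefix fields end up unset (None).
extracted = dict(zip(fields, values))
for name in fields:
    extracted.setdefault(name, None)

print(extracted["test_field1"])   # "file.log" -- meant for s3_filename
print(extracted["s3_filename"])   # None -- nothing left to map
```

With DELIMS/FIELDS alone the mapping is purely positional, so the usual fixes are to have the producer emit every column (empty strings for missing values) or to handle the short-row case at search time.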
Can't seem to get my head round this one - I've got a table and would like the users to be able to click on a row and add a Summary comment, but there's a bug in the code. The comments get submitted BEFORE I click on the Submit button, which doesn't seem to work anyway.

<form version="1.1" theme="light" script="TA-images_and-_files:tokenlinks.js">
  <label>Report</label>
  <search>
    <query>| makeresults|eval Date=strftime(_time,"%d/%m/%Y")|fields - _time</query>
    <done>
      <set token="defaut_time">$result.Date$</set>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="date_tok" searchWhenChanged="true">
      <label>Date:</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>| makeresults | timechart span=1d count | sort - _time | eval Date=strftime(_time, "%d/%m/%Y"), earliest=relative_time(_time, "@d") | table Date, earliest | head 7 | sort - earliest</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <default>$defaut_time$</default>
    </input>
    <input type="dropdown" token="shift_tok" searchWhenChanged="true">
      <label>Shift:</label>
      <choice value="Day">Day</choice>
      <choice value="Night">Night</choice>
      <default>Day</default>
      <initialValue>Day</initialValue>
    </input>
  </fieldset>
  <row>
    <panel id="input_panel" depends="$show_input$">
      <input type="text" token="Summary">
        <label>Summary</label>
      </input>
      <input type="text" token="Date">
        <label>Date</label>
      </input>
      <input type="text" token="Time">
        <label>Time</label>
      </input>
      <input type="text" token="Shift">
        <label>Shift</label>
      </input>
      <html>
        <div>
          <button type="button" id="buttonId" class="btn btn-primary">Submit</button>
          <button style="margin-left:10px;" class="btn" data-token-json="{&quot;show_input&quot;: null}">Cancel</button>
        </div>
      </html>
    </panel>
  </row>
  <row depends="$hideMe$">
    <panel>
      <table>
        <search>
          <done>
            <unset token="form.Summary"></unset>
            <unset token="form.Date"></unset>
            <unset token="form.Time"></unset>
            <unset token="form.Shift"></unset>
            <unset token="show_input"></unset>
          </done>
          <query>| inputlookup handover_timeline_comments.csv | append [ | makeresults | eval "Summary" = "$form.Summary$", Shift="$form.Shift$", Date="$form.Date$", Time="$form.Time$" ] | outputlookup handover_timeline_comments.csv</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
          <refresh>30</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=24 | eval Date= "$date_tok$", Shift="$shift_tok$" | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | eval Time=case( Shift == "Night" AND Time &gt;= "19:00", Time, Shift == "Day" AND Time &gt;= "07:00" AND Time &lt;= "18:00", Time, 1==1, null ) | where isnotnull(Time) | append [ | makeresults count=24 | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | table Time | eval Date= "$date_tok$", Shift="$shift_tok$" | eval Time=case( Shift == "Night" AND Time &lt;= "06:00", Time, 1==1, null ) | where isnotnull(Time) ] | eval Summary="" | fields - _time | lookup handover_timeline_comments.csv Date Shift Time OUTPUT Summary | eventstats last(Summary) as Summary by Date Shift Time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <refresh>10s</refresh>
        </search>
        <option name="count">12</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="form.Date">$row.Date$</set>
          <set token="form.Shift">$row.Shift$</set>
          <set token="form.Time">$row.Time$</set>
          <set token="show_input">true</set>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

.js:

requirejs([
  '../app/simple_xml_examples/libs/jquery-3.6.0-umd-min',
  '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
  'util/console',
  'splunkjs/mvc',
  'splunkjs/mvc/simplexml/ready!'
], function($, _, console, mvc) {
  function setToken(name, value) {
    console.log('Setting Token %o=%o', name, value);
    var defaultTokenModel = mvc.Components.get('default');
    if (defaultTokenModel) {
      defaultTokenModel.set(name, value);
    }
    var submittedTokenModel = mvc.Components.get('submitted');
    if (submittedTokenModel) {
      submittedTokenModel.set(name, value);
    }
  }
  $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
    e.preventDefault();
    var target = $(e.currentTarget);
    var setTokenName = target.data('set-token');
    if (setTokenName) {
      setToken(setTokenName, target.data('value'));
    }
    var unsetTokenName = target.data('unset-token');
    if (unsetTokenName) {
      setToken(unsetTokenName, undefined);
    }
    var tokenJson = target.data('token-json');
    if (tokenJson) {
      try {
        if (_.isObject(tokenJson)) {
          _(tokenJson).each(function(value, key) {
            if (value === null) {
              // Unset the token
              setToken(key, undefined);
            } else {
              setToken(key, value);
            }
          });
        }
      } catch (e) {
        console.warn('Cannot parse token JSON: ', e);
      }
    }
  });
});
Well... the typical approach is to get a piece of paper (or some Excel sheet) and list all (kinds of) sources you're gonna be getting the events from. And work with that - think who should have access to that data, how much of that data you're gonna need, for how long it will have to be stored, and what use cases you'll be implementing with this data (because that also can affect the data distribution). It's a bit complicated (which is why the Splunk Certified Architect certification is something you get only after you've already certified for Admin - and while the course itself might be taken before that, there's not much point in doing so), and it's hard to cover all possible caveats of proper index architecture in a short post. But bear in mind that indexes in Splunk do not define the data contained within them in any way (at least from the technical point of view). They are just... "sacks" for data thrown in. And you can - if needed - have several different "kinds" of data within one index, and usually (unless you have some strange border case) it doesn't matter and doesn't give you a significant performance penalty. You simply choose some of that data when searching by specifying metadata fields (like host, source, sourcetype) as your search terms. One exception here (which I'll not dig into since we're apparently not talking about it at the moment): there are two types of indexes - event indexes and metrics indexes.
The multiselect dropdown is essentially a multi-value field. This can be delimited (using the delimiter option), for example by setting it to a comma. Since you have string values, you might want to set valuePrefix and valueSuffix to double quotes ("); then you can use the token in an IN clause. That said, it depends on how you are using the token in your dashboard searches.
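To make the mechanics concrete, here is a small Python sketch (not Splunk code; the function name is made up) of how delimiter, valuePrefix, and valueSuffix combine the selected values into the single token string that an IN clause expects:

```python
def render_token(values, delimiter=",", value_prefix='"', value_suffix='"'):
    """Mimic how a multiselect joins its selections into one token string:
    each value is wrapped in prefix/suffix, then all are joined by delimiter."""
    return delimiter.join(f"{value_prefix}{v}{value_suffix}" for v in values)

selected = ["Server001", "Server004", "Server007"]
token = render_token(selected)
# The token drops into the search as: host IN ($SERVERS$)
print(f"host IN ({token})")  # host IN ("Server001","Server004","Server007")
```

Without the quoting options, the joined values would be bare words and any value containing spaces or special characters would break the generated search.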
Hi @hazem , as I said, on the LB you only have to configure the rule to associate the receiving port with the IP addresses and port of the receivers. In addition, depending on the LB, you should configure how the LB checks whether the receivers are alive, but this isn't a Splunk configuration and it depends on the LB (so I cannot help you there). In other words: you must define a VIP and a port to use for sending logs from the syslog sources, and then associate this VIP and port with the destination IP addresses and port (of the UFs). There isn't a best practice, only that the LB must check whether the destinations are alive. There's only one thing that isn't clear: why are you speaking of a single intermediate forwarder? To have HA, you need at least two UFs, otherwise the LB is completely useless. Ciao. Giuseppe
If you were starting over, or even just looking to refactor what you have, number 3 is the most important question: what do you (and your users) want to get out of putting all this data into Splunk? The other questions are supplementary to this. Establish some short-term goals, start building / refactoring small, and add the capabilities and features that your users want; let them guide you.
Okay...almost there. Adding the "as host" evidently forces the column / field name to read as "host", which must match the property setting of the UI. Simple enough. However, the query is returning a single string rather than several strings. My expected result for the field would be to see
Server001
Server004
Server007
...but what I get is
Server001Server004Server007
Is it possible to get these to be individual choices?
Okay, if I understand the two of you correctly then I'm in a pickle...not a fan of swimming in the brine so to speak. There was no "architecting" or "engineering" or "questioning" involved, so I suspect the Splunk Admin presumed we knew what we wanted. I walked into this cold and after the fact w/ a "make it work" directive. Keep in mind, two weeks ago I could spell "Splunk" but had never used it. I find it quite complicated. Soooooooo, if I were to start over, what are the requirements one must address w/ respect to having the index(es) built? From the above I can see we should have addressed:
1. Data retention period by Dev/Test/Prod
2. Access restrictions (if any) by the same
3. What are you wanting to measure? (implies "knowing" the data)
4. Age / staleness of data allowed
5. Volume size of the ingested logs per server that are used to build the index(es)
Would there happen to be a "best practice" approach that should be used, or should have been used, regarding the initial build of the environment? Thanks all!
Dear @gcusello  I have already configured rsyslog on both intermediate forwarders and need to set up the load balancer to receive traffic from syslog devices and forward it to a single backend intermediate forwarder. If the load balancer administrator asks, what is the best practice for configuring the load balancer to forward traffic to our intermediate forwarder?
Splunk support recently upgraded Datapunctum Alert Manager Enterprise to version 3.1.1, but it is now broken.  When I open the app it directs me to complete the update tasks.  When I try to execute the first task, I get an error saying Internal Server Error and can't progress any further.  The documentation doesn't help with troubleshooting this issue.
Try something like this
<query>| inputlookup servers
| where Enclave="$ENCLAVE$" AND Type="$Type$" AND Application="$APPLICATION$"
| stats values(host) as host</query>
Otherwise the field name is "values(host)", which doesn't match fieldForLabel and fieldForValue.
Hi @hrawat , please open a case with Splunk Support soon. Ciao. Giuseppe
Let me disagree here with you on one thing. Adding a load balancer in front of syslog receivers does not usually solve any problems (especially because LBs typically "don't speak" syslog; and even more so since "syslog" can mean many different things - from an RFC5424-compliant message to "just throw anything at UDP/514"), and it introduces an additional layer of complexity and a potential SPOF.
At the moment the query reads as:
| inputlookup my_servers
| stats values(host)
Obviously there is no token at the moment to restrict the list, so I should get a rather long list of servers to select from. The source for the XML frame or block is
<input type="multiselect" token="SERVERS">
  <label>SERVERS</label>
  <search>
    <query>| inputlookup servers
| stats values(host)</query>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <delimiter> </delimiter>
</input>
Different crashes during tcpout reload.

Received fatal signal 6 (Aborted) on PID . Cause: Signal sent by PID running under UID . Crashing thread: indexerPipe_1
Backtrace (PIC build):
[0x000014BC540AFB8F] gsignal + 271 (libc.so.6 + 0x4EB8F)
[0x000014BC54082EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055BCEBEFC1A7] __assert_fail + 135 (splunkd + 0x51601A7)
[0x000055BCEBEC4BD9] ? (splunkd + 0x5128BD9)
[0x000055BCE9013E72] _ZN34AutoLoadBalancedConnectionStrategyD0Ev + 18 (splunkd + 0x2277E72)
[0x000055BCE905DC99] _ZN14TcpOutputGroupD1Ev + 217 (splunkd + 0x22C1C99)
[0x000055BCE905E002] _ZN14TcpOutputGroupD0Ev + 18 (splunkd + 0x22C2002)
[0x000055BCE905FC6F] _ZN15TcpOutputGroups14checkSendStateEv + 623 (splunkd + 0x22C3C6F)
[0x000055BCE9060F08] _ZN15TcpOutputGroups4sendER15CowPipelineData + 88 (splunkd + 0x22C4F08)
[0x000055BCE90002FA] _ZN18TcpOutputProcessor7executeER15CowPipelineData + 362 (splunkd + 0x22642FA)
[0x000055BCE9829628] _ZN9Processor12executeMultiER18PipelineDataVectorPS0_ + 72 (splunkd + 0x2A8D628)
[0x000055BCE8D29D25] _ZN8Pipeline4mainEv + 1157 (splunkd + 0x1F8DD25)
[0x000055BCEBF715EE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)
[0x000055BCEBF716FB] _ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)
[0x000014BC552AC1DA] ? (libpthread.so.0 + 0x81DA)

Another reload crash:
Backtrace (PIC build):
[0x00007F456828700B] gsignal + 203 (libc.so.6 + 0x2100B)
[0x00007F4568266859] abort + 299 (libc.so.6 + 0x859)
[0x0000560602B5B4B7] __assert_fail + 135 (splunkd + 0x5AAA4B7)
[0x00005605FF66297A] _ZN15TcpOutputClientD1Ev + 3130 (splunkd + 0x25B197A)
[0x00005605FF6629F2] _ZN15TcpOutputClientD0Ev + 18 (splunkd + 0x25B19F2)
[0x0000560602AD7807] _ZN9EventLoop3runEv + 839 (splunkd + 0x5A26807)
[0x00005605FF3555AD] _ZN11Distributed11EloopRunner4mainEv + 205 (splunkd + 0x22A45AD)
[0x0000560602BD03FE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x5B1F3FE)
[0x0000560602BD050B] _ZN6Thread8callMainEPv + 139 (splunkd + 0x5B1F50B)
[0x00007F4568CAD609] ? (libpthread.so.0 + 0x2609)
[0x00007F4568363353] clone + 67 (libc.so.6 + 0xFD353)
Linux / myhost / 5.15.0-1055-aws / #60~20.04.1-Ubuntu SMP Thu
assertion_failure="!_hasDataInTransit"
assertion_function="virtual TcpOutputClient::~TcpOutputClient()"

Starting with Splunk 9.2, outputs.conf is reloadable. Whenever a deployment client pulls a bundle from the DS, conf files are reloaded during the reload, depending on the changes; one of those conf files is outputs.conf. Prior to 9.2, outputs.conf was not reloadable, which means hitting the following endpoints would do nothing:
/data/outputs/tcp/server
or
https://<host>:<port>/servicesNS/-/-/admin/tcpout-group/_reload
The behavior changed in 9.2, and outputs.conf is now reloadable. However, reloading outputs.conf is a very complex process, as it involves shutting down tcpout groups safely, and there are still cases where Splunk crashes. We are working on fixing the reported crashes.

NOTE (Splunk Cloud and others): the following workaround is NOT for a crash caused by a /debug/refresh induced forced reload. There is no workaround available for a crash caused by /debug/refresh, except not to use /debug/refresh.
Workaround
As mentioned, before 9.2 outputs.conf was never reloadable (a no-op for _reload), thus no crashes/complications. Set the following in local/app.conf as a workaround:
[triggers]
reload.outputs = simple
With the setting above, Splunk will take no action on a tcpout (outputs.conf) reload (the behavior before 9.2). If outputs.conf is changed via the DS, restart Splunk.
Hi @hazem , adding a bit to @PickleRick's information: you can configure an rsyslog (or syslog-ng) server on your UFs. You don't need to install it because it's already installed; you only have to configure it so it knows where to write the logs. For more info see https://www.rsyslog.com/guides/ or https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-basic_configuration_of_rsyslog  On the LB, you only need to configure the receiving port and the destination port and addresses. Some LBs also need to be configured with a way to check whether the destinations are alive, but that configuration depends on your LB and is independent of Splunk or the rsyslog receiver. Ciao. Giuseppe
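For reference, a minimal rsyslog sketch along the lines Giuseppe describes might look like the following. This is an assumption-laden example, not a tested configuration: the port, file paths, and ruleset name are placeholders to adapt to your environment. It listens on UDP and TCP 514 and writes each sender's events to its own file, which the UF can then monitor:

```
# /etc/rsyslog.d/splunk.conf -- minimal sketch; ports and paths are examples
module(load="imudp")
module(load="imtcp")

# One output file per sending host, so a UF can monitor /var/log/remote/*
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")

ruleset(name="fromRemote") {
    action(type="omfile" dynaFile="PerHostFile")
}

input(type="imudp" port="514" ruleset="fromRemote")
input(type="imtcp" port="514" ruleset="fromRemote")
```

Splitting files per host keeps the host metadata recoverable in Splunk, and a log-rotation policy for /var/log/remote is worth adding so the directory doesn't grow unbounded.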
Splunk recommendation is to NOT send syslog data directly (or via a LB) to a Splunk instance.  Syslog should be sent to a dedicated syslog server (running syslog-ng or rsyslog) and then forwarded to Splunk.  The syslog servers should be positioned as close to the data source as possible to avoid data loss.  Use of a load balancer in front of the syslog servers is recommended for resiliency. For more information, see https://docs.splunk.com/Documentation/Splunk/latest/Data/HowSplunkEnterprisehandlessyslogdata#Caveats_to_using_Splunk_Enterprise_as_a_syslog_server_or_message_sender https://www.splunk.com/en_us/blog/tips-and-tricks/high-performance-syslogging-for-splunk-using-syslog-ng-part-1.html  
There is no such document because generally it's not recommended to LB "syslog" traffic. You should keep your syslog receiver as simple as possible and as close to the source as possible.
Splunk doesn't look "backwards", so you have to think backwards.
1. As Splunk by default returns events in reverse chronological order, you have to | reverse them to get them in straight chronological order.
2. Assuming that you already have the REQ field extracted, keep track of its values over a 7-minute-long window:
| streamstats time_window=7m values(REQ) AS reqvals
3. Now you can find those events matching your search string and not having the value of REQ copied over from earlier events:
| search "Error occurred during message exchange" AND NOT reqvals="INI"
Two caveats:
1. The search might be slow. Depending on your actual data you might make it faster by searching only for "Error occurred during message exchange" OR REQ
2. Remember that a!=b is not the same as NOT a=b, especially when dealing with multivalued fields.
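The streamstats step above is just a sliding time window. A small Python sketch (with made-up events; not how Splunk implements it internally) shows the same "which REQ values occurred in the trailing 7 minutes" bookkeeping:

```python
from collections import deque

def window_values(events, window_s=7 * 60):
    """For each (timestamp, req) event in chronological order, collect the
    set of req values seen within the trailing window -- analogous to
    `| streamstats time_window=7m values(REQ) AS reqvals`."""
    recent = deque()  # (timestamp, req) pairs still inside the window
    out = []
    for ts, req in events:
        recent.append((ts, req))
        # Evict events older than the window relative to the current event.
        while recent and ts - recent[0][0] > window_s:
            recent.popleft()
        out.append((ts, req, {r for _, r in recent if r is not None}))
    return out

# ts=0 carries REQ="INI"; by ts=900 it has aged out of the 420-second window.
events = [(0, "INI"), (120, None), (500, "ERR"), (900, None)]
for ts, req, reqvals in window_values(events):
    print(ts, req, reqvals)
```

An error event whose reqvals set lacks "INI" is exactly the case the final `| search ... NOT reqvals="INI"` step isolates.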
Is there anything in Splunk's documentation to guide a load balancer administrator on configuring the load balancer in front of intermediate forwarders to receive syslog traffic from security devices on port 514?