All Posts


Individual sourcetypes cannot be deleted. Data is deleted by the bucket, which is a subset of an index, and when a bucket is deleted, all events in that bucket are removed from the system. The delete command does not delete data; it merely hides it from view. There is no backend command to delete data.

If you are fortunate, the undesired sourcetypes are the only ones in their respective indexes. In that case you can set frozenTimePeriodInSecs for those indexes to 1 and wait for Splunk to delete their buckets.

If, like most sites, you have a mixture of sourcetypes in your indexes, then it becomes more of a challenge. One option (sketched below):
1. Copy the sourcetypes you wish to keep into a different index using the collect command. This will impact your ingestion license.
2. Set frozenTimePeriodInSecs on the original index to 1 and wait for the buckets to be deleted. This will delete everything in the index. On-prem environments can use the clean CLI command to delete the index.
3. Revert the frozenTimePeriodInSecs setting.
4. Use the collect command to copy the desired data back to the original index. This avoids having to change the queries that use that index name, and will impact your ingestion license (again). In an on-prem environment, you can instead rename the temporary index to the original name.

See https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/RemovedatafromSplunk#Remove_all_data_from_one_or_all_indexes for more information.
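As a very rough sketch of that option, assume the original index is called main_idx, the sourcetype to keep is st_keep, and a temporary index temp_keep has already been created - all three names are placeholders, not taken from the post above. Step 1 could look like

index=main_idx sourcetype=st_keep earliest=0 latest=now | collect index=temp_keep

step 2 is an indexes.conf change on the indexers

[main_idx]
frozenTimePeriodInSecs = 1

and step 4, after frozenTimePeriodInSecs has been reverted, is the same collect in the opposite direction

index=temp_keep sourcetype=st_keep earliest=0 latest=now | collect index=main_idx

Note that collect has options controlling the sourcetype it writes (the default is stash), which also affects license accounting, so review the collect documentation before running anything like this at scale.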
I think I understand what you are asking about, but without a sample of the ingested data and a sample of the new output it is hard to tell what is going wrong.
Hello, we configured rsyslog and it is now receiving logs from the appliances, saving them locally to disk and sending copies to the remote destinations on the client side. However, we now have problems with indexing: data is no longer being received from the HFs. I suspect the UFs are undersized for all of these activities. Is there a way to check whether we have a performance problem? Thank you, Andrea
Hello - I am trying to construct a search whereby I can do a lookup against a single table, then rename the fields and change how they're displayed; however, the lookup and eval commands don't seem to be working as I would like. The main search I am performing is basic, using some source subnets and then trying to have the lookup reference which area of the business they belong to. Below is the lookup portion of my search:

index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x"
| lookup file.csv cidr AS sourceip OUTPUT field_a AS sourceprovider, field_b AS sourcearea, field_c AS sourcezone, field_d AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT field_a AS destprovider, field_b AS destarea, field_c AS destzone, field_d AS destregion, cidr AS dest_cidr
| fillnull value="none"
| eval src_details_combined=sourceprovider."-".sourcearea."-".sourcezone."-".sourceregion
| eval dest_details_combined=destprovider."-".destarea."-".destzone."-".destregion
| eval src_details_combined=IF(src_details_combined=="none-none-none-none","notfound",src_details_combined)
| eval dest_details_combined=IF(dest_details_combined=="none-none-none-none","notfound",dest_details_combined)
| stats count values(sourceip) as sourceip values(destip) as destip by src_details_combined, dest_details_combined, rule, dest_port, app
| table src_details_combined, dest_details_combined, app, count

When I run the search I do get some results, but the src_details_combined and dest_details_combined fields always return as "notfound" - even though I know the IPs should match in the lookup CSV. Can anyone see where I have gone wrong in my search?
Hello Splunkers!! I hope all is well. There are some sourcetypes in Splunk that hold a large amount of data, but we are not using them in any dashboards or saved searches. I want to delete those sourcetypes and have a few questions about doing so:
1. What is the best approach to delete the sourcetypes' data in Splunk (using the delete command or from the backend)?
2. Does deleting the historical data from those sourcetypes impact the other, useful sourcetypes?
3. Can it cause bucket corruption?
4. The unused sourcetypes hold millions of events, so what is the fastest approach to delete such large chunks of historical data?
Thanks in advance. Advice and suggestions are really appreciated!!
No option to wrap it - honestly, you may want to replace your viz with a table, which can treat all of these values as text and can wrap them. When you are showing this much information, a table provides options that may fit better.
Hello, I am responsible for providing self-signed SSL certificates for the Splunk servers. Could you guide me, given that I am working in a distributed architecture consisting of an SH, 2 indexers, a server handling the DS, and forwarders? How many certificates will I need to generate, and do the forwarders also require SSL certificates? If possible, I would greatly appreciate any relevant documentation to assist me in this process. Best regards,
Can the Splunkbase app "Splunk AI Assistant for SPL" be installed on an on-prem deployment of Splunk Enterprise, or is the app only for public cloud deployments of Splunk Enterprise? If it is not available for on-prem, are there plans to build an app for on-prem, and what are the current timelines?
We are using the v9 log format in Splunk. It has been working fine and we can see the logs in Splunk as expected. We then added 4 more fields in transforms.conf and tested the add-on in Splunk. Now the additional fields take the values of s3_filename, the bucket name and the prefix, which are appended at the end - which is not the correct behavior.

We are looking for a solution so that each value is parsed into the correct field, and the additional fields are null when there are no values for them in the logs.

transforms.conf

[proxylogs_fields]
DELIMS = ","
FIELDS = Timestamp,policy_identities,src,src_translated_ip,dest,content_type,action,url,http_referrer,http_user_agent,status,requestSize,responseSize,responseBodySize,sha256,category,av_detection,pua,amp_disposition,amp_malwarename,amp_score,policy_identity_type,blocked_category,identities,identity_type,request_method,dlp_status,certificate_errors,filename,rulesetID,ruleID,destinationListID,isolateAction,fileAction,warnStatus,forwarding_method,Producer,test_feild1,test_field2,test_field3,test_field4,s3_filename,aws_bucket_name,aws_prefix

props.conf

[cisco:cloud_security:proxy]
REPORT-proxylogs-fields = proxylogs_fields,extract_url_domain
LINE_BREAKER = ([\r\n]+)
# EVENT_BREAKER = ([\r\n]+)
# EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
CHARSET = AUTO
disabled = false
TRUNCATE = 1000000
MAX_EVENTS = 1000000
EVAL-product = "Cisco Secure Access and Umbrella"
EVAL-vendor = "Cisco"
EVAL-vendor_product = "Cisco Secure Access/Umbrella"
MAX_TIMESTAMP_LOOKAHEAD = 22
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"
TZ = UTC
FIELDALIAS-bytes_in = requestSize as bytes_in
FIELDALIAS-bytes_out = responseSize as bytes_out
EVAL-action = lower(action)
EVAL-app = "Cisco Cloud Security"
FIELDALIAS-http_content_type = content_type as http_content_type
EVAL-http_user_agent_length = len(http_user_agent)
EVAL-url_length = len(url)
EVAL-dest = if(isnotnull(dest),dest,url_domain)
EVAL-bytes = requestSize + responseSize
Can't seem to get my head round this one - I've got a table and would like the users to be able to click on a row and add a Summary comment, but there's a bug in the code. The comments get submitted BEFORE I click on the Submit button, which doesn't seem to work anyway.

<form version="1.1" theme="light" script="TA-images_and-_files:tokenlinks.js">
  <label>Report</label>
  <search>
    <query>| makeresults|eval Date=strftime(_time,"%d/%m/%Y")|fields - _time</query>
    <done>
      <set token="defaut_time">$result.Date$</set>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="date_tok" searchWhenChanged="true">
      <label>Date:</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>| makeresults | timechart span=1d count | sort - _time | eval Date=strftime(_time, "%d/%m/%Y"), earliest=relative_time(_time, "@d") | table Date, earliest | head 7 | sort - earliest</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <default>$defaut_time$</default>
    </input>
    <input type="dropdown" token="shift_tok" searchWhenChanged="true">
      <label>Shift:</label>
      <choice value="Day">Day</choice>
      <choice value="Night">Night</choice>
      <default>Day</default>
      <initialValue>Day</initialValue>
    </input>
  </fieldset>
  <row>
    <panel id="input_panel" depends="$show_input$">
      <input type="text" token="Summary">
        <label>Summary</label>
      </input>
      <input type="text" token="Date">
        <label>Date</label>
      </input>
      <input type="text" token="Time">
        <label>Time</label>
      </input>
      <input type="text" token="Shift">
        <label>Shift</label>
      </input>
      <html>
        <div>
          <button type="button" id="buttonId" class="btn btn-primary">Submit</button>
          <button style="margin-left:10px;" class="btn" data-token-json="{&quot;show_input&quot;: null}">Cancel</button>
        </div>
      </html>
    </panel>
  </row>
  <row depends="$hideMe$">
    <panel>
      <table>
        <search>
          <done>
            <unset token="form.Summary"></unset>
            <unset token="form.Date"></unset>
            <unset token="form.Time"></unset>
            <unset token="form.Shift"></unset>
            <unset token="show_input"></unset>
          </done>
          <query>| inputlookup handover_timeline_comments.csv | append [ | makeresults | eval "Summary" = "$form.Summary$", Shift="$form.Shift$", Date="$form.Date$", Time="$form.Time$" ] | outputlookup handover_timeline_comments.csv</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
          <refresh>30</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=24 | eval Date= "$date_tok$", Shift="$shift_tok$" | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | eval Time=case( Shift == "Night" AND Time &gt;= "19:00", Time, Shift == "Day" AND Time &gt;= "07:00" AND Time &lt;= "18:00", Time, 1==1, null ) | where isnotnull(Time) | append [ | makeresults count=24 | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | table Time | eval Date= "$date_tok$", Shift="$shift_tok$" | eval Time=case( Shift == "Night" AND Time &lt;= "06:00", Time, 1==1, null ) | where isnotnull(Time) ] | eval Summary="" | fields - _time | lookup handover_timeline_comments.csv Date Shift Time OUTPUT Summary | eventstats last(Summary) as Summary by Date Shift Time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <refresh>10s</refresh>
        </search>
        <option name="count">12</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="form.Date">$row.Date$</set>
          <set token="form.Shift">$row.Shift$</set>
          <set token="form.Time">$row.Time$</set>
          <set token="show_input">true</set>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

.js:

requirejs([
    '../app/simple_xml_examples/libs/jquery-3.6.0-umd-min',
    '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
    'util/console',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function($, _, console, mvc) {
    function setToken(name, value) {
        console.log('Setting Token %o=%o', name, value);
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }
    $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        var target = $(e.currentTarget);
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value === null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});
Well... the typical approach is to get a piece of paper (or an Excel sheet) and list all the (kinds of) sources you're going to be getting events from, and work with that - think about who should have access to that data, how much of it you're going to need, how long it will have to be stored, and what use cases you'll be implementing with it (because that can also affect how the data is distributed). It's a bit complicated (that's why the Splunk Certified Architect certification is something you get only after you've already certified as an Admin - and while the course itself can be taken earlier, there's not much point in doing so), and it's hard to cover all the possible caveats of a proper index architecture in a short post. But bear in mind that indexes in Splunk do not define the data contained within them in any way (at least from a technical point of view). They are just... "sacks" that data gets thrown into. You can - if needed - have several different "kinds" of data within one index, and usually (unless you have some strange border case) it doesn't matter and doesn't give you a significant performance penalty. You simply choose some of that data when searching by specifying metadata fields (like host, source, sourcetype) as your search terms - see the sketch below. One exception, which I won't dig into since we're apparently not talking about it at the moment: there are two types of indexes - event indexes and metrics indexes.
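To illustrate that last point with hypothetical names (none of them are from this thread): if firewall and web data share a single index called network, the index term only narrows where Splunk looks, and the metadata fields pick out the actual data set.

index=network sourcetype=cisco:asa host=fw01 action=blocked
| stats count by src_ip

Swapping the sourcetype (say, to access_combined for the web servers) selects a completely different slice of the same index without any change to the index itself.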
The multiselect input is essentially a multi-value field. Its values can be delimited (using the delimiter option), for example by setting it to a comma. Since you have string values, you might also want to set valuePrefix and valueSuffix to double quotes ("); then you can use the token in an IN clause. That said, it depends on how you are using the token in your dashboard searches - see the sketch below.
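As a sketch of how those options fit together - the token name host_tok, the lookup name, and the field are placeholders, not taken from this thread - the input could look like

<input type="multiselect" token="host_tok" searchWhenChanged="true">
  <label>Hosts</label>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>| inputlookup servers | stats count by host | fields host</query>
  </search>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>

and a panel search could then use it as

index=main host IN ($host_tok$)

With Server001 and Server004 selected, $host_tok$ expands to "Server001","Server004", which is what the IN clause expects.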
Hi @hazem , as I said, on the LB you only have to configure the rule that associates the receiving port with the IP addresses and port of the receivers. In addition, depending on the LB, you should configure how the LB checks whether the receivers are alive, but this isn't a Splunk configuration and it depends on the LB (so I cannot help you there). In other words: you must define a VIP and a port that the syslog sources send logs to, and then associate that VIP and port with the destination IP addresses and port (of the UFs). There isn't a best practice beyond the requirement that the LB must check whether the destinations are alive. There's only one thing that isn't clear: why are you speaking of a single intermediate forwarder? To have HA, you need at least two UFs, otherwise the LB is completely useless. Ciao. Giuseppe
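Since the LB product isn't named in this thread, here is a purely illustrative sketch of what is described above (a VIP and port on the front, the two intermediate forwarders as health-checked backends), written as an HAProxy configuration; the choice of HAProxy, the names and the addresses are all assumptions, not taken from the thread:

frontend syslog_in
    bind 10.0.0.10:514          # the VIP and port the syslog sources send to
    mode tcp
    default_backend intermediate_forwarders

backend intermediate_forwarders
    mode tcp
    balance roundrobin
    # "check" makes the LB verify that each receiver is alive before sending it traffic
    server fwd1 10.0.0.21:514 check
    server fwd2 10.0.0.22:514 check

This sketch covers TCP syslog only; UDP syslog would need a different mechanism on most load balancers.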
If you were starting over, or even just looking to refactor what you have, number 3 is the most important question: what do you (and your users) want to get out of putting all this data into Splunk? The other questions are supplementary to this. Establish some short-term goals, start small when building or refactoring, and add the capabilities and features your users want - let them guide you.
Okay... almost there. Adding the "as host" evidently forces the column/field name to read as "host", which must match the property setting in the UI. Simple enough. However, the query is returning a single string rather than several strings. My expected result for the field would be to see

Server001
Server004
Server007

...but what I get is

Server001Server004Server007

Is it possible to get these to be individual choices?
Okay, if I understand the two of you correctly then I'm in a pickle... not a fan of swimming in the brine, so to speak. There was no "architecting" or "engineering" or "questioning" involved, so I suspect the Splunk admin presumed we knew what we wanted. I walked into this cold and after the fact, with a "make it work" directive. Keep in mind, two weeks ago I could spell "Splunk" but had never used it. I find it quite complicated.

Soooooooo, if I were to start over, what requirements would one need to address with respect to having the index(es) built? From the above I can see we should have addressed:
1. Data retention period by Dev/Test/Prod
2. Access restrictions (if any) by the same
3. What are you wanting to measure? (implies "knowing" the data)
4. Age/staleness of data allowed
5. Volume of the ingested logs per server that are used to build the index(es)

Would there happen to be a "best practice" approach that should be used, or should have been used, regarding the initial build of the environment? Thanks all!
Dear @gcusello  I have already configured rsyslog on both intermediate forwarders and need to set up the load balancer to receive traffic from syslog devices and forward it to a single backend intermediate forwarder. If the load balancer administrator asks, what is the best practice for configuring the load balancer to forward traffic to our intermediate forwarder?
Splunk support recently upgraded Datapunctum Alert Manager Enterprise to version 3.1.1, but it is now broken.  When I open the app it directs me to complete the update tasks.  When I try to execute the first task, I get an error saying Internal Server Error and can't progress any further.  The documentation doesn't help with troubleshooting this issue.
Try something like this

<query>| inputlookup servers
| where Enclave="$ENCLAVE$" AND Type="$Type$" AND Application="$APPLICATION$"
| stats values(host) as host</query>

Otherwise the field name is "values(host)", which doesn't match fieldForLabel and fieldForValue.
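For context, a minimal sketch of the dropdown this query would feed - the token name host_token is a placeholder, not from the thread - showing that fieldForLabel and fieldForValue must name the field the search actually returns:

<input type="dropdown" token="host_token">
  <label>Host</label>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>| inputlookup servers
| where Enclave="$ENCLAVE$" AND Type="$Type$" AND Application="$APPLICATION$"
| stats values(host) as host</query>
  </search>
</input>

If the selectable values then appear concatenated into a single choice, splitting the multi-value result into separate rows (for example by ending the query with | mvexpand host) is a separate adjustment worth trying.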
Hi @hrawat , please open a case with Splunk Support soon. Ciao. Giuseppe