All Topics


Hello! I am new to Splunk and AWS. I just set up Splunk on a Linux server in AWS. I now want to ingest sample data into AWS and forward it so I can view it in Splunk. I know I need to use the universal forwarder, but how do I actually go about getting the data into an S3 bucket so it can then be forwarded to Splunk? Yes, I know I can ingest sample data straight into Splunk, but I am trying to get real-world experience to land a job in cybersecurity!
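One way to practice the end-to-end flow, sketched under the assumption that the universal forwarder runs on the same Linux host (the bucket, paths, and index name below are made up): upload a sample log to S3 (for example, aws s3 cp sample.log s3://my-practice-bucket/), copy it back down to the forwarder host (aws s3 cp s3://my-practice-bucket/sample.log /var/log/sample/), and have the forwarder monitor that directory:

    # inputs.conf on the universal forwarder (hypothetical paths and index)
    [monitor:///var/log/sample]
    index = main
    sourcetype = sample_data
    disabled = 0

An alternative that skips the forwarder entirely is the Splunk Add-on for AWS, which can pull objects directly from S3 into Splunk.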
Hi, we are decommissioning our Splunk infrastructure; our company was purchased and the new management wants to free up resources :(. We have 3 search heads (standalone) + 2 indexers (clustered). They asked me to break the indexer cluster to free storage, CPU, and memory. I've found docs about removing nodes while keeping the cluster. We want to keep just one search head (the one acting as license master) and one indexer. Is there documentation on how to "break" the cluster and keep just one indexer in standalone mode? (We need to keep the data for auditing reasons.) I know I can put one peer in maintenance mode and power it off, but that procedure is intended for rebooting or replacing the "faulty" indexer at some point, not for keeping it down forever. Regards.
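For reference, a rough sketch of the usual approach (please verify against the official indexer-cluster docs before touching production): make sure the surviving peer holds a full copy of the data, stop Splunk on it, remove the clustering configuration from server.conf so it starts as a standalone indexer, then decommission the other peer and the cluster manager, and re-add the standalone indexer to the remaining search head as an ordinary search peer.

    # server.conf on the surviving indexer (sketch; back up the file first)
    # Remove or comment out the clustering stanza so the peer starts standalone:
    # [clustering]
    # mode = peer
    # manager_uri = https://<cluster-manager>:8089   (master_uri on older versions)
    # pass4SymmKey = <key>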
I have ClamAV running on all my Linux hosts (universal forwarders), and all logs seem to be fine except the ClamAV logs. The ClamAV scan report has an unusual log format (see below), and I need help ingesting it; Splunk (splunkd.log) shows an error when I try. I think I need to set up a props.conf, but I am not sure how to go about it. This is an air-gapped system, just FYI.

splunkd.log:

ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/var/log/audit/clamav_scan_20240916_111846.log). Last time we saw this, filename was different. You may wish to use larger initCrcLen for this sourcetype or a CRC salt on this source.

The ClamAV scan generates a log file as shown below:

-----------SCAN SUMMARY--------------
Known Viruses: xxxxxx
Engine Version: x.xx.x
Scanned Directories: xxx
Scanned Files: xxxxx
Infected Files: x
Data Scanned: xxxxMB
Data Read: xxxxMB
Time:
Start Date: 2024:09:16 14:46:58
End Date: 2024:09:16 16:33:06
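The TailReader error is about the CRC check on the first bytes of the file: every scan report begins with the same SCAN SUMMARY banner, so Splunk assumes it has already read the file. A sketch of the kind of config that typically addresses this (sourcetype name, paths, and index are assumptions, not the one true answer):

    # inputs.conf on the universal forwarder
    [monitor:///var/log/audit/clamav_scan_*.log]
    sourcetype = clamav:scan
    index = main
    crcSalt = <SOURCE>

    # props.conf on the indexers (or wherever parsing happens)
    [clamav:scan]
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = -+SCAN SUMMARY-+
    MAX_EVENTS = 256
    TIME_PREFIX = Start Date:\s
    TIME_FORMAT = %Y:%m:%d %H:%M:%S

crcSalt = <SOURCE> adds the full path to the checksum so each new scan file is treated as a new source, and BREAK_ONLY_BEFORE keeps the whole report together as one event.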
I have several apps set up to segregate our various products, and I've added icons to the apps. My issue is that the icon is being placed over the app name, when it should be placed next to it. For example, the Search and Reporting app has the white arrow on a green background to the left of the app name. How do I get the icon to be placed to the left of the app name?
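If it helps, a sketch of the file layout Splunk usually expects (sizes from memory, so double-check the "Add icons and logos" docs): appIcon.png is the small image shown beside the app name, while appLogo.png is rendered in the app bar in place of the name, which can look like the image is sitting on top of the label.

    $SPLUNK_HOME/etc/apps/<your_app>/static/
        appIcon.png       # ~36x36 px, shown next to the app name in menus
        appIcon_2x.png    # ~72x72 px, high-DPI variant
        appLogo.png       # optional; replaces the app name text in the app bar

After swapping the files you may need to restart Splunk Web (or bump the web cache) for the change to show up.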
As a test, I first created some credit card numbers using a Python script. I placed the script, along with inputs and props, on the search head, and placed only the props on the indexers. The following SEDCMD masks the 1st and 3rd sets of 4 digits; the two captured groups (the 2nd and 4th sets of 4 digits) are not masked.

props:

[cc_generator]
SEDCMD-maskcc = s/\d{4}-(\d{4})-\d{4}-(\d{4})/xxxx-\1-xxxx-\2/g

inputs:

[script://./bin/my_cc_generator.py]
interval = */30 * * * *
sourcetype = cc_generator
disabled = 0
index = mypython

output: xxxx-9874-xxxx-9484
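As a point of comparison, if the goal is the more common PCI-style masking of everything except the last group, a variant of the same stanza (only the sed expression changes) would be:

    [cc_generator]
    SEDCMD-maskcc = s/\d{4}-\d{4}-\d{4}-(\d{4})/xxxx-xxxx-xxxx-\1/g

    # output would then look like: xxxx-xxxx-xxxx-9484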
I am trying to remove the year from the time labels on an area chart without messing up the chart's format. I've tried fieldformat, but that would break the chart when the new year rolls over. Any help would be great.
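One workaround that sometimes fits, sketched with made-up index and field names: keep _time for ordering but chart over a label string built with strftime, so the x-axis shows only day and month. The trade-off is that the x-axis becomes a category axis, which is exactly why a label without the year can misbehave across a year boundary.

    index=my_index
    | timechart span=1d count
    | eval day_label = strftime(_time, "%d %b")
    | table day_label count

Because timechart already sorts by _time, the rows stay in chronological order even though the chart renders day_label.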
I am trying to create a new field called "description" that contains values from two other existing fields. If the field "app" is equal to linux, then I want to combine the existing fields "host" and "alert_type". If the field "app" is equal to windows, then I want to combine the existing fields "host" and "severity". If app equals anything else, I want the value to be false. Below is the eval I have, but it's not working:

| eval description=if('app'=="linux", host. "-" .alert_type', 'app'==windows, host. "-" .severity, "false")
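For what it's worth, if() only takes a single condition, so a three-way branch like this is usually written with case(); a sketch using the field names from the question (note that the stray quote after alert_type and the missing quotes around windows in the original would also trip the parser):

    | eval description=case(
        app=="linux",   host."-".alert_type,
        app=="windows", host."-".severity,
        true(),         "false")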
Hi, can anybody help with this problem, please?

source1: lookup table (lookup.csv)

att1 att2 att3
F1 1100 12.09.2024
F2 1100 23.04.2024
F3 1100 15.06.2024
F4 1100 16.03.2024

att1 is also present in index=myindex. I want a table that shows, for every att1 from lookup.csv, the count of all events from index=myindex with att1=$att1$ AND earliest=strptime($att3$, "%d.%m.%Y").

Desired output:

att1 count(from myindex) att2 att3
F1 count 1100 12.09.2024
F2 count 1100 23.04.2024
F3 count 1100 15.06.2024
F4 count 1100 16.03.2024
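A sketch of one common way to do this without a per-row earliest (which plain SPL does not support directly): pull the att1 values into the base search via a subsearch, join each event back to its lookup row with the lookup command, and filter every row against its own att3 date with where. The outer search's time range still has to reach back at least as far as the oldest att3.

    index=myindex [ | inputlookup lookup.csv | fields att1 ]
    | lookup lookup.csv att1 OUTPUT att2 att3
    | eval start_epoch = strptime(att3, "%d.%m.%Y")
    | where _time >= start_epoch
    | stats count by att1 att2 att3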
Is there a way to get Service Endpoint values (response time, load, errors) into Analytics so they can be queried? I have multiple custom service endpoints that look at the performance of API calls from a specific customer; they are calls like createCart and placeOrder, etc. Is there a way for me to get values like load, response time, and error counts for these service endpoints into Analytics? I know I can get those metrics for business transactions, but these service endpoints are subsets within the BTs, and I don't want to have to create a custom BT for each of these custom service endpoints if I can avoid it. Thanks, Greg
Hi, is the dnslookup available in Splunk Cloud like it is in Splunk Enterprise?
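For context, this refers to the external DNS lookup that ships with the Search app in Splunk Enterprise; usage is along these lines (the clienthost/clientip field names are as I remember them, so verify against your lookup definition):

    ... | lookup dnslookup clienthost AS host OUTPUT clientip
    ... | lookup dnslookup clientip AS src_ip OUTPUT clienthost AS src_host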
Hi team, can you please help me find a way to use a CSV file containing the external-vs-internal user ID mapping in Splunk? Below is the current query and output that extracts the internal user ID; I need another column that adds the corresponding external user ID.

CSV file: ABC.csv

usr_id,eml_add_ds
internal user id 1 , external user id 1
internal user id 2 , external user id 2
internal user id 3 , external user id 3
internal user id 4 , external user id 4

Query:

(index=ABC) ("Start" OR "Finish") Properties.AspNetCoreEnvironment="*"
| rex field=Message "Start:\s*(?<start_info>[^\s]+)"
| rex field=Message "user\s(?<Userid>[^\took|.]+)"
| search start_info=*
| table Userid
| sort time

Output: (a single-column table of Userid values; screenshot omitted)
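Assuming ABC.csv is uploaded as a lookup table file (Settings > Lookups) with the header row exactly as shown, the mapping can be added with a lookup just before the table command; a sketch:

    ...
    | lookup ABC.csv usr_id AS Userid OUTPUT eml_add_ds AS external_user_id
    | table Userid external_user_id

One caveat: the usr_id values must match Userid exactly, so the stray spaces around the commas in the CSV may need to be trimmed first.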
Hello - I am trying to construct a search where I do a lookup against a single table, then rename the fields and change how they're displayed; however, the lookup and eval commands don't seem to be working as I would like. The main search is basic, using some source subnets, and then I try to have the lookup tell me which area of the business they belong to. Below is the lookup portion of my search:

index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x"
| lookup file.csv cidr AS sourceip OUTPUT field_a AS sourceprovider, field_b AS sourcearea, field_c AS sourcezone, field_d AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT field_a AS destprovider, field_b AS destarea, field_c AS destzone, field_d AS destregion, cidr AS dest_cidr
| fillnull value="none"
| eval src_details_combined=sourceprovider."-".sourcearea."-".sourcezone."-".sourceregion
| eval dest_details_combined=destprovider."-".destarea."-".destzone."-".destregion
| eval src_details_combined=IF(src_details_combined=="none-none-none-none","notfound",src_details_combined)
| eval dest_details_combined=IF(dest_details_combined=="none-none-none-none","notfound",dest_details_combined)
| stats count values(sourceip) as sourceip values(destip) as destip by src_details_combined, dest_details_combined, rule, dest_port, app
| table src_details_combined, dest_details_combined, app, count

When I run the search I do get some results, but the src_details_combined and dest_details_combined fields always come back as "notfound", even though I know the IPs should match entries in the lookup CSV. Can anyone see where I have gone wrong in my search?
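One thing worth checking, sketched here as an assumption about the setup: a plain file-based lookup does exact string matching, so an IP address will never match a CIDR range in the CSV unless the lookup is wrapped in a lookup definition with a CIDR match_type, something along these lines (the stanza name is made up):

    # transforms.conf (or Settings > Lookups > Lookup definitions > Advanced options)
    [subnet_areas]
    filename = file.csv
    match_type = CIDR(cidr)
    max_matches = 1

The search would then call | lookup subnet_areas cidr AS sourceip OUTPUT ... instead of referencing file.csv directly.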
Hello Splunkers!! I hope all is well. There are some sourcetypes in Splunk that hold a large amount of data, but we are not using them in any of our dashboards or saved searches. I want to delete those sourcetypes, and I have some questions about the deletion, as listed below.

1. What is the best approach to delete the sourcetype data in Splunk (the delete command or doing it from the backend)?
2. Does deleting the historical data from those sourcetypes impact the other, useful sourcetypes?
3. Can it cause bucket corruption?
4. The unused sourcetypes hold millions of events, so what is the fastest approach to delete large historical data chunks?

Thanks in advance. Advice and suggestions are really appreciated!!
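For reference, a sketch of the search-based route (index and sourcetype names are placeholders): the delete command requires a role with the can_delete capability, must be run over a time range covering the data (e.g. All Time), and only hides events from search results; it does not free disk space, which usually comes back only through retention settings or by cleaning/removing whole indexes.

    index=my_index sourcetype=unused_sourcetype | delete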
Hello, I am responsible for providing self-signed SSL certificates for our Splunk servers. Could you guide me, considering that I am working in a distributed architecture consisting of a SH, 2 indexers, a server handling the DS, and forwarders? How many certificates will I need to generate, and do the forwarders also require SSL certificates? If possible, I would greatly appreciate any relevant documentation to assist me in this process. Best regards,
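In case it is useful as a starting point, a minimal sketch of generating a self-signed CA and one server certificate with openssl (file names are placeholders; the Splunk docs on securing inter-Splunk communication spell out which instances need which certs, and each instance, forwarders included, typically gets its own server certificate signed by the same CA):

    # create a private CA (sketch)
    openssl genrsa -out myCAPrivateKey.key 2048
    openssl req -new -x509 -key myCAPrivateKey.key -out myCACertificate.pem -days 1095

    # create and sign one server certificate (repeat per Splunk instance)
    openssl genrsa -out myServerPrivateKey.key 2048
    openssl req -new -key myServerPrivateKey.key -out myServerCertificate.csr
    openssl x509 -req -in myServerCertificate.csr -CA myCACertificate.pem -CAkey myCAPrivateKey.key \
        -CAcreateserial -out myServerCertificate.pem -days 1095

    # Splunk usually expects the server cert, its key, and the CA cert concatenated into one PEM
    cat myServerCertificate.pem myServerPrivateKey.key myCACertificate.pem > myNewServerCertificate.pem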
Can the Splunkbase app "Splunk AI Assistant for SPL" be installed on an on-prem deployment of Splunk Enterprise, or is the app only for public cloud deployments? If it is not available for on-prem, are there plans to build an app for on-prem, and what are the current timelines?
We are using the v9 log format in Splunk. It has been working fine and we can see the logs in Splunk as expected. We then added 4 more fields in transforms.conf and tested the add-on. Now the additional fields take on the values of s3_filename, bucket name, and prefix (which are appended at the end of the field list), which is not correct behavior. We are looking for a solution where each value is parsed into the correct field, and the additional fields are null when the log has no value for them.

transforms.conf:

[proxylogs_fields]
DELIMS = ","
FIELDS = Timestamp,policy_identities,src,src_translated_ip,dest,content_type,action,url,http_referrer,http_user_agent,status,requestSize,responseSize,responseBodySize,sha256,category,av_detection,pua,amp_disposition,amp_malwarename,amp_score,policy_identity_type,blocked_category,identities,identity_type,request_method,dlp_status,certificate_errors,filename,rulesetID,ruleID,destinationListID,isolateAction,fileAction,warnStatus,forwarding_method,Producer,test_feild1,test_field2,test_field3,test_field4,s3_filename,aws_bucket_name,aws_prefix

props.conf:

[cisco:cloud_security:proxy]
REPORT-proxylogs-fields = proxylogs_fields,extract_url_domain
LINE_BREAKER = ([\r\n]+)
# EVENT_BREAKER = ([\r\n]+)
# EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
CHARSET = AUTO
disabled = false
TRUNCATE = 1000000
MAX_EVENTS = 1000000
EVAL-product = "Cisco Secure Access and Umbrella"
EVAL-vendor = "Cisco"
EVAL-vendor_product = "Cisco Secure Access/Umbrella"
MAX_TIMESTAMP_LOOKAHEAD = 22
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"
TZ = UTC
FIELDALIAS-bytes_in = requestSize as bytes_in
FIELDALIAS-bytes_out = responseSize as bytes_out
EVAL-action = lower(action)
EVAL-app = "Cisco Cloud Security"
FIELDALIAS-http_content_type = content_type as http_content_type
EVAL-http_user_agent_length = len(http_user_agent)
EVAL-url_length = len(url)
EVAL-dest = if(isnotnull(dest),dest,url_domain)
EVAL-bytes = requestSize + responseSize
Can't seem to get my head round this one - I've got a table and would like the users to be able to click on a row and add a Summary comment, but there's a bug in the code. The comments get submitted BEFORE I click on the Submit button, which doesn't seem to work anyway.

<form version="1.1" theme="light" script="TA-images_and-_files:tokenlinks.js">
  <label>Report</label>
  <search>
    <query>| makeresults|eval Date=strftime(_time,"%d/%m/%Y")|fields - _time</query>
    <done>
      <set token="defaut_time">$result.Date$</set>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="date_tok" searchWhenChanged="true">
      <label>Date:</label>
      <fieldForLabel>Date</fieldForLabel>
      <fieldForValue>Date</fieldForValue>
      <search>
        <query>| makeresults | timechart span=1d count | sort - _time | eval Date=strftime(_time, "%d/%m/%Y"), earliest=relative_time(_time, "@d") | table Date, earliest | head 7 | sort - earliest</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <default>$defaut_time$</default>
    </input>
    <input type="dropdown" token="shift_tok" searchWhenChanged="true">
      <label>Shift:</label>
      <choice value="Day">Day</choice>
      <choice value="Night">Night</choice>
      <default>Day</default>
      <initialValue>Day</initialValue>
    </input>
  </fieldset>
  <row>
    <panel id="input_panel" depends="$show_input$">
      <input type="text" token="Summary">
        <label>Summary</label>
      </input>
      <input type="text" token="Date">
        <label>Date</label>
      </input>
      <input type="text" token="Time">
        <label>Time</label>
      </input>
      <input type="text" token="Shift">
        <label>Shift</label>
      </input>
      <html>
        <div>
          <button type="button" id="buttonId" class="btn btn-primary">Submit</button>
          <button style="margin-left:10px;" class="btn" data-token-json="{&quot;show_input&quot;: null}">Cancel</button>
        </div>
      </html>
    </panel>
  </row>
  <row depends="$hideMe$">
    <panel>
      <table>
        <search>
          <done>
            <unset token="form.Summary"></unset>
            <unset token="form.Date"></unset>
            <unset token="form.Time"></unset>
            <unset token="form.Shift"></unset>
            <unset token="show_input"></unset>
          </done>
          <query>| inputlookup handover_timeline_comments.csv | append [ | makeresults | eval "Summary" = "$form.Summary$", Shift="$form.Shift$", Date="$form.Date$", Time="$form.Time$" ] | outputlookup handover_timeline_comments.csv</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
          <refresh>30</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=24 | eval Date= "$date_tok$", Shift="$shift_tok$" | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | eval Time=case( Shift == "Night" AND Time &gt;= "19:00", Time, Shift == "Day" AND Time &gt;= "07:00" AND Time &lt;= "18:00", Time, 1==1, null ) | where isnotnull(Time) | append [ | makeresults count=24 | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | table Time | eval Date= "$date_tok$", Shift="$shift_tok$" | eval Time=case( Shift == "Night" AND Time &lt;= "06:00", Time, 1==1, null ) | where isnotnull(Time) ] | eval Summary="" | fields - _time | lookup handover_timeline_comments.csv Date Shift Time OUTPUT Summary | eventstats last(Summary) as Summary by Date Shift Time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <refresh>10s</refresh>
        </search>
        <option name="count">12</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="form.Date">$row.Date$</set>
          <set token="form.Shift">$row.Shift$</set>
          <set token="form.Time">$row.Time$</set>
          <set token="show_input">true</set>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

.js:

requirejs([
  '../app/simple_xml_examples/libs/jquery-3.6.0-umd-min',
  '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
  'util/console',
  'splunkjs/mvc',
  'splunkjs/mvc/simplexml/ready!'
], function($, _, console, mvc) {
  function setToken(name, value) {
    console.log('Setting Token %o=%o', name, value);
    var defaultTokenModel = mvc.Components.get('default');
    if (defaultTokenModel) {
      defaultTokenModel.set(name, value);
    }
    var submittedTokenModel = mvc.Components.get('submitted');
    if (submittedTokenModel) {
      submittedTokenModel.set(name, value);
    }
  }
  $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) {
    e.preventDefault();
    var target = $(e.currentTarget);
    var setTokenName = target.data('set-token');
    if (setTokenName) {
      setToken(setTokenName, target.data('value'));
    }
    var unsetTokenName = target.data('unset-token');
    if (unsetTokenName) {
      setToken(unsetTokenName, undefined);
    }
    var tokenJson = target.data('token-json');
    if (tokenJson) {
      try {
        if (_.isObject(tokenJson)) {
          _(tokenJson).each(function(value, key) {
            if (value === null) {
              // Unset the token
              setToken(key, undefined);
            } else {
              setToken(key, value);
            }
          });
        }
      } catch (e) {
        console.warn('Cannot parse token JSON: ', e);
      }
    }
  });
});
Splunk support recently upgraded Datapunctum Alert Manager Enterprise to version 3.1.1, but it is now broken. When I open the app, it directs me to complete the update tasks. When I try to execute the first task, I get an "Internal Server Error" and can't progress any further. The documentation doesn't help with troubleshooting this issue.
Different crashes during tcpout reload.

Received fatal signal 6 (Aborted) on PID . Cause: Signal sent by PID running under UID . Crashing thread: indexerPipe_1

Backtrace (PIC build):
[0x000014BC540AFB8F] gsignal + 271 (libc.so.6 + 0x4EB8F)
[0x000014BC54082EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055BCEBEFC1A7] __assert_fail + 135 (splunkd + 0x51601A7)
[0x000055BCEBEC4BD9] ? (splunkd + 0x5128BD9)
[0x000055BCE9013E72] _ZN34AutoLoadBalancedConnectionStrategyD0Ev + 18 (splunkd + 0x2277E72)
[0x000055BCE905DC99] _ZN14TcpOutputGroupD1Ev + 217 (splunkd + 0x22C1C99)
[0x000055BCE905E002] _ZN14TcpOutputGroupD0Ev + 18 (splunkd + 0x22C2002)
[0x000055BCE905FC6F] _ZN15TcpOutputGroups14checkSendStateEv + 623 (splunkd + 0x22C3C6F)
[0x000055BCE9060F08] _ZN15TcpOutputGroups4sendER15CowPipelineData + 88 (splunkd + 0x22C4F08)
[0x000055BCE90002FA] _ZN18TcpOutputProcessor7executeER15CowPipelineData + 362 (splunkd + 0x22642FA)
[0x000055BCE9829628] _ZN9Processor12executeMultiER18PipelineDataVectorPS0_ + 72 (splunkd + 0x2A8D628)
[0x000055BCE8D29D25] _ZN8Pipeline4mainEv + 1157 (splunkd + 0x1F8DD25)
[0x000055BCEBF715EE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)
[0x000055BCEBF716FB] _ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)
[0x000014BC552AC1DA] ? (libpthread.so.0 + 0x81DA)

Another reload crash:

Backtrace (PIC build):
[0x00007F456828700B] gsignal + 203 (libc.so.6 + 0x2100B)
[0x00007F4568266859] abort + 299 (libc.so.6 + 0x859)
[0x0000560602B5B4B7] __assert_fail + 135 (splunkd + 0x5AAA4B7)
[0x00005605FF66297A] _ZN15TcpOutputClientD1Ev + 3130 (splunkd + 0x25B197A)
[0x00005605FF6629F2] _ZN15TcpOutputClientD0Ev + 18 (splunkd + 0x25B19F2)
[0x0000560602AD7807] _ZN9EventLoop3runEv + 839 (splunkd + 0x5A26807)
[0x00005605FF3555AD] _ZN11Distributed11EloopRunner4mainEv + 205 (splunkd + 0x22A45AD)
[0x0000560602BD03FE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x5B1F3FE)
[0x0000560602BD050B] _ZN6Thread8callMainEPv + 139 (splunkd + 0x5B1F50B)
[0x00007F4568CAD609] ? (libpthread.so.0 + 0x2609)
[0x00007F4568363353] clone + 67 (libc.so.6 + 0xFD353)

Linux / myhost / 5.15.0-1055-aws / #60~20.04.1-Ubuntu SMP Thu
assertion_failure="!_hasDataInTransit" assertion_function="virtual TcpOutputClient::~TcpOutputClient()"

Starting with Splunk 9.2, outputs.conf is reloadable. Whenever a deployment client pulls a bundle from the deployment server, the affected conf files are reloaded depending on the changes; one of those conf files is outputs.conf. Prior to 9.2, outputs.conf was not reloadable, which means hitting the following endpoints would do nothing:

/data/outputs/tcp/server
https://<host>:<port>/servicesNS/-/-/admin/tcpout-group/_reload

The behavior changed in 9.2 and outputs.conf is now reloadable. However, reloading outputs.conf is a very complex process because it involves shutting down tcpout groups safely, and there are still cases where Splunk crashes. We are working on fixing the reported crashes.

NOTE (Splunk Cloud and others): the following workaround is NOT for a crash caused by a /debug/refresh induced forced reload. There is no workaround for a crash caused by /debug/refresh, other than not using /debug/refresh.

Workaround

As mentioned, before 9.2 outputs.conf was never reloadable (_reload was a no-op), so there were no crashes or complications. Set the following in the app's local/app.conf as a workaround:

[triggers]
reload.outputs = simple

With the setting above, Splunk will take no action on a tcpout (outputs.conf) reload, which matches the pre-9.2 behavior. If outputs.conf is changed via the DS, restart Splunk.
Is there any Splunk documentation to guide a load balancer administrator on configuring a load balancer in front of intermediate forwarders to receive syslog traffic from security devices on port 514?