All Topics


I have installed eventid.net, but it keeps saying it is not configured yet. What am I missing in the config? I attached screenshots to assist.
Hi,

We are getting a lot of events into our Splunk instance, which is rapidly filling up our disk storage. Our search query is: index="cloud" sourcetype=sls_sourcetype

Upon investigating, we decided that we need to capture only the events where the field "sql" contains one of the following 4 values:

logout!
login success!
login failed!
create

The field "sql" actually contains more than 100 values, so we need to exclude all of them and capture only the events with the 4 values above. We tried the configurations below in transforms.conf and props.conf, but none of them give the desired results. Can someone please help us with the correct settings? This is the first time we are working with transforms.conf and props.conf.

transforms.conf:

[setnull]
SOURCE_KEY = _raw
REGEX = *
DEST_KEY = queue
FORMAT = nullQueue

[setqueue]
SOURCE_KEY = _raw
REGEX = sql=log(out! |in success! |in failure!)
DEST_KEY = queue
FORMAT = indexQueue

props.conf:

[sourcetype::sls_sourcetype]
TRANSFORMS-set = setnull,setparsing
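A minimal corrected sketch, assuming the raw events literally contain strings such as sql=logout!. Three things look wrong above: REGEX = * is not a valid regular expression (use . to match any event), the props.conf stanza for a sourcetype is just its name (the sourcetype:: prefix is not valid; the source:: prefix is only for sources), and TRANSFORMS-set references setparsing while transforms.conf defines setqueue. Something like:

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = sql=(logout!|login success!|login failed!|create)
DEST_KEY = queue
FORMAT = indexQueue

props.conf:

[sls_sourcetype]
TRANSFORMS-set = setnull, setparsing

Order matters: setnull runs first and routes every event to the nullQueue, then setparsing re-routes the four matching event types back to the indexQueue. These transforms must live on the first full (heavy) Splunk instance that parses the data, not on a universal forwarder.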
Hi All, I need to collect "Thread Dump" and "Heap Dump" output from the application into Splunk. What are the possible ways to achieve this?
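One possible direction, as a sketch (the paths, names, and interval are all hypothetical): thread dumps are plain text, so a scripted input on the forwarder can run jstack on a schedule and index its stdout. Heap dumps are large binary files, so for those it is more common to index metadata about the dump files (or analyzer output) than the dumps themselves.

inputs.conf:

[script://./bin/thread_dump.sh]
interval = 300
sourcetype = java:thread_dump
index = app_diag

bin/thread_dump.sh:

#!/bin/sh
# Print a thread dump of the target JVM to stdout; Splunk indexes whatever the script prints
jstack "$(pgrep -f my-application.jar | head -1)"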
Splunk AppInspect reports the "check_for_supported_tls" failure with this description: "If you are using requests.post to talk to your own infra with non-public PKI, make sure you bundle your own CA certs as part of your app and pass the path into requests.post as an arg." I am using verify=False in the requests.post() method and getting the above error in the AppInspect tool.
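A minimal sketch of what the check appears to be asking for (the path and endpoint below are hypothetical): ship your internal CA chain inside the app and pass its path to requests, instead of disabling verification with verify=False:

import os
import requests

# Hypothetical: the CA bundle is packaged inside the app, e.g. <app>/bin/certs/internal_ca.pem
CA_BUNDLE = os.path.join(os.path.dirname(__file__), "certs", "internal_ca.pem")

# Verify the server against the bundled CA instead of turning verification off
response = requests.post(
    "https://internal.example.com/api",  # your own infra with non-public PKI
    json={"key": "value"},
    verify=CA_BUNDLE,
)
response.raise_for_status()

AppInspect flags verify=False because it disables TLS certificate validation entirely; pointing verify at a bundled CA file keeps the connection verified.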
I have Splunk UF 7.0.3 that I want to send logs from to Splunk Cloud. However, this UF version doesn't support httpout, so I am using an intermediate forwarder. Can someone give me the inputs.conf and outputs.conf for the intermediate forwarder, and the outputs.conf for the UF that sends to the intermediate forwarder?
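A minimal sketch, with hostnames and ports as placeholders. On the old UF, outputs.conf points at the intermediate forwarder:

[tcpout]
defaultGroup = intermediate_group

[tcpout:intermediate_group]
server = intermediate-fwd.example.com:9997

On the intermediate forwarder, inputs.conf listens for the forwarded data:

[splunktcp://9997]
disabled = 0

For the intermediate forwarder's own output to Splunk Cloud, the usual approach is not a hand-written outputs.conf: you download the Splunk Cloud universal forwarder credentials app from your Splunk Cloud instance (typically named 100_<stackname>_splunkcloud) and install it on the intermediate forwarder; it carries the required tcpout, TLS, and certificate settings.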
Good day experts. To manage ingestion volume, I need to apply truncation to a source that sends a pretty high volume of data. However, we do not wish to truncate all events from this source, only certain events which are less critical. I tried overriding the sourcetype in transforms.conf with a regex that matches the less critical events and applying TRUNCATE in props.conf to the custom sourcetype, but it failed. Only then did I recall that these transforms and props apply at index time. The transform worked as expected to change the sourcetype, but it did not truncate. Has anyone faced a similar use case, or can anyone share a way to manage this? Thank you in advance.
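A possible explanation and workaround, as a rough sketch: TRUNCATE is applied during line breaking/merging, which happens before the typing pipeline rewrites the sourcetype, so a TRUNCATE keyed to the rewritten sourcetype never fires. One workaround is a SEDCMD on the original sourcetype that trims only events containing some marker of a less critical event (the marker and length here are hypothetical, and the regex will need tuning):

props.conf:

[original_sourcetype]
# Keep the marker plus up to 1000 characters, drop the rest; events without the marker are untouched
SEDCMD-trim_less_critical = s/(LESS_CRITICAL_MARKER.{0,1000}).*/\1/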
Searches Delayed

Root Cause(s): The percentage of non high priority searches delayed (22%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total searches that were part of this percentage = 147626. Total delayed searches = 32735.

How can we permanently solve this issue?
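A starting point for diagnosis, as a sketch (the lasting fix is usually staggering cron schedules so searches don't all fire at the same minute, shortening long-running scheduled searches, or adding search capacity): find which scheduled searches are not completing normally and why:

index=_internal sourcetype=scheduler status!=success
| stats count by status, app, savedsearch_name
| sort - count

Statuses such as "skipped", "deferred", or "continued" point at scheduler pressure from too many concurrent scheduled searches.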
Hi all, I am trying to find a way to use the REST API (e.g. the search endpoint) for Splunk, but my problem is that my company uses an Okta application to log in to Splunk, so I can't have a username or password for Splunk. Is there any way to work around this auth?
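One possibility, as a sketch (the hostname and token are placeholders): SAML/Okta login doesn't prevent the use of Splunk authentication tokens. If an admin can issue you a token (Settings > Tokens), the REST API accepts it as a Bearer token, so no username/password is involved:

curl -k -H "Authorization: Bearer <your_token>" \
    https://splunk.example.com:8089/services/search/jobs \
    -d search="search index=_internal | head 5"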
I have a JSON file I am trying to search for a specific value - EventType=GoodMail - and then pull the values from another field - {}.MessageCount. I have the following search to pull back the EventType of just GoodMail:

index="mail_reports"
| spath
| mvexpand "{}.EventType"
| search {}.EventType=GoodMail

But if I add this to the end of the search:

| stats values "{}.MessageCount"

I get: "Error in 'stats' command: The argument '{}.MessageCount' is invalid." How do I modify the search to pull back the values for {}.MessageCount? Thx
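One possible approach, assuming {}.EventType and {}.MessageCount are parallel arrays in the JSON. stats needs a function wrapped around the field (values(...)), and braces in field names are awkward, so zip the two arrays together before expanding; this also keeps each count paired with its own event type, which a bare mvexpand would lose:

index="mail_reports"
| spath
| eval pair = mvzip('{}.EventType', '{}.MessageCount')
| mvexpand pair
| eval EventType = mvindex(split(pair, ","), 0), MessageCount = mvindex(split(pair, ","), 1)
| search EventType=GoodMail
| stats values(MessageCount) as MessageCount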
Splunk receives 2 different stream data sets on a single HEC (JSON).

Set 1 has call records. Set 2 has call status/disposition.

So if I want call detail information from set 1 on calls that meet criteria in set 2, I have to join the records. I used to use 'join' but read several articles about other ways and came across this method, which I like, but it really feels slow/heavy:

index="myindex" resource="somefilter"
| stats values(*) as * by guid
| search column="terminated"

because we have millions of rows to search from and I'm just looking for a few. I tried adding my search criteria higher up, like this:

index="myindex" resource="somefilter" column="terminated"
| stats values(*) as * by guid

but then the other columns come back empty (I presume because it filtered out the set 1 records, so there was nothing to join). So I am looking for another/faster/better way to: 1. get data from set 2 with criteria X; 2. bring back matches of that data from set 1. Always many thanks for the education!
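One pattern worth trying, as a sketch (subsearches are capped by default at 10,000 results and a runtime limit, so this only works while the list of matching guids stays small): let a subsearch collect just the guids that satisfy the set 2 criteria, then fetch both sets' events for only those guids before the stats:

index="myindex" resource="somefilter"
    [ search index="myindex" resource="somefilter" column="terminated"
      | fields guid ]
| stats values(*) as * by guid

The subsearch expands into (guid=... OR guid=...), so the outer search only reads events for the few matching calls instead of running stats over millions of rows.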
(Novice) Is there a way to uniquely identify the information that is being sent to a single indexer from multiple forwarders in separate environments? Each environment is a mirror of the other. They all have the same IPs and hostnames, including the forwarders. Maybe there is a tag the forwarder can apply, or something else that makes them unique?
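One possibility, as a sketch (the field name and values are made up): each environment's forwarders can stamp an indexed field onto everything they send using _meta in inputs.conf, which survives identical hostnames and IPs:

inputs.conf on the forwarders in environment A:

[default]
_meta = environment::envA

(and environment::envB on the mirror). You can then filter by the indexed field directly in searches, e.g. index=main environment::envA.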
I am observing intermittent issues parsing IIS data. Splunk is configured for index-time parsing of IIS events on the universal forwarders (INDEXED_EXTRACTIONS). The extraction works fine for most events, but a small percentage (less than 1%) fail parsing.

I am detecting the events that fail parsing with the following SPL:

index=[IIS INDEXES] sourcetype=iis NOT c_ip=*

I have noticed an error in the splunkd.log on the universal forwarders that accounts for some of these issues:

04-06-2022 20:08:42.602 -0500 WARN CsvLineBreaker - Parser warning: Encountered unescaped quotation mark in field while parsing. This may cause inaccurate field extractions or corrupt/merged events. - data_source="e:\iis-logs\W3SVC1\u_ex220407.log", data_host="XXXXX", data_sourcetype="iis"

In these cases, it appears that not only does index-time field parsing fail but event breaking fails as well, resulting in many events getting lumped into a single event. This may not be avoidable, and we're at least able to point to a cause for these issues, but many more are unexplained.

For most of the events that fail parsing, the result is a single-line event which appears to be formatted correctly but has no indexed fields. I was originally having an issue with these events reporting in the future as well, but adding a time zone to props.conf seems to have at least resolved that issue.

I have upgraded through several versions (8.1.2, 8.2.3, 8.2.7.1) on the universal forwarders and have seen this issue across all these versions.

If you have any ideas on what might be causing these index-time parsing failures for IIS data, I would love to hear them.
Working on a Splunk Cloud environment - we always keep things tidy by re-assigning ownership of knowledge objects (KOs) to either Nobody or the correct user with the correct role. But why can't we do this for Calculated Fields?
Hi guys, I'm trying to show a table where one column contains icons based on the cell value. I used the examples provided in the Splunk Dashboard Examples app. The problem I'm experiencing is that if I load the dashboard, the table is rendered with the plain string, but if I switch to edit mode, the table is rendered correctly with the icons. Reloading the page renders the table without icons, and any time I go through edit mode the table renders with icons.

Dashboard XML:

<dashboard version="1.1" stylesheet="css/test.css" script="js/test.js">
  <label>test</label>
  <row>
    <panel>
      <table id="table1">
        <search>
          <query>| inputcsv thermal_structure.csv | eval range = case(status=="ok","low",status=="warning","elevated",status=="no","severe") | table zone,range</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>

css/test.css:

/* Custom Icons */
td.icon { text-align: center; }
td.icon i { font-size: 25px; text-shadow: 1px 1px #aaa; }
td.icon .severe { color: red; }
td.icon .elevated { color: orangered; }
td.icon .low { color: #006400; }

/* Row Coloring */
#highlight tr td { background-color: #c1ffc3 !important; }
#highlight tr.range-elevated td { background-color: #ffc57a !important; }
#highlight tr.range-severe td { background-color: #d59392 !important; }
#highlight .table td { border-top: 1px solid #fff; }
#highlight td.range-severe, td.range-elevated { font-weight: bold; }
.icon-inline i { font-size: 18px; margin-left: 5px; }
.icon-inline i.icon-alert-circle { color: #ef392c; }
.icon-inline i.icon-alert { color: #ff9c1a; }
.icon-inline i.icon-check { color: #5fff5e; }

/* Dark Theme */
td.icon i.dark { text-shadow: none; }

/* Row Coloring */
#highlight tr.dark td { background-color: #5BA383 !important; }
#highlight tr.range-elevated.dark td { background-color: #EC9960 !important; }
#highlight tr.range-severe.dark td { background-color: #AF575A !important; }
#highlight .table .dark td { border-top: 1px solid #000000; color: #F2F4F5; }

js/test.js:

requirejs([
    '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
    '../app/simple_xml_examples/theme_utils',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, themeUtils, mvc, TableView) {
    // Translations from rangemap results to CSS class
    var ICONS = {
        severe: 'alert-circle',
        elevated: 'alert',
        low: 'check-circle'
    };
    var RangeMapIconRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Only use the cell renderer for the range field
            return cell.field === 'range';
        },
        render: function($td, cell) {
            var icon = 'question';
            var isDarkTheme = themeUtils.getCurrentTheme() === 'dark';
            // Fetch the icon for the value
            // eslint-disable-next-line no-prototype-builtins
            if (ICONS.hasOwnProperty(cell.value)) {
                icon = ICONS[cell.value];
            }
            // Create the icon element and add it to the table cell
            $td.addClass('icon').html(_.template('<i class="icon-<%-icon%> <%- range %> <%- isDarkTheme %>" title="<%- range %>"></i>', {
                icon: icon,
                range: cell.value,
                isDarkTheme: isDarkTheme ? 'dark' : ''
            }));
        }
    });
    mvc.Components.get('table1').getVisualization(function(tableView) {
        // Register custom cell renderer, the table will re-render automatically
        tableView.addCellRenderer(new RangeMapIconRenderer());
    });
});
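A possible fix, as a hedged guess: on a fresh page load the custom renderer may only be registered after the table has already rendered once, so forcing a re-render right after attaching it can help (an older variant of this same example calls render() explicitly):

mvc.Components.get('table1').getVisualization(function(tableView) {
    tableView.addCellRenderer(new RangeMapIconRenderer());
    // Force a re-render in case the table drew before the renderer was attached
    tableView.table.render();
});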
I have a problem with my Splunk.com credentials when I attempt to install the app "Splunk Add-on for Microsoft Windows". The error message says that the credentials are incorrect, but I verified the credentials in my browser, and they are correct. I reset my password on the splunk.com web site, but I still can't log on to install the app in Splunk.

Can you help me with this problem, please?
Hello, I am very new to Splunk and would greatly appreciate any advice. I have a collection of reports that each contain the following fields:

BM3_SGA_TGT_SIZE BM2_SGA_TGT_SIZE BM1_SGA_TGT_SIZE B_SGA_TGT_SIZE BP1_SGA_TGT_SIZE BP2_SGA_TGT_SIZE BP3_SGA_TGT_SIZE
BM3_EST_PHYREAD BM2_EST_PHYREAD BM1_EST_PHYREAD B_EST_PHYREAD BP1_EST_PHYREAD BP2_EST_PHYREAD BP3_EST_PHYREAD
BM3_EST_DBTIME BM2_EST_DBTIME BM1_EST_DBTIME B_EST_DBTIME BP1_EST_DBTIME BP2_EST_DBTIME BP3_EST_DBTIME

Each report has been added to Splunk as an event. I would like to create a single scatter plot that aggregates the data across all the events and plots according to the following:

(1st Y-axis) BM3_EST_PHYREAD and (2nd Y-axis) BM3_EST_DBTIME over (X-axis) BM3_SGA_TGT_SIZE
(1st Y-axis) BM2_EST_PHYREAD and (2nd Y-axis) BM2_EST_DBTIME over (X-axis) BM2_SGA_TGT_SIZE
(1st Y-axis) BM1_EST_PHYREAD and (2nd Y-axis) BM1_EST_DBTIME over (X-axis) BM1_SGA_TGT_SIZE
(1st Y-axis) B_EST_PHYREAD and (2nd Y-axis) B_EST_DBTIME over (X-axis) B_SGA_TGT_SIZE
(1st Y-axis) BP1_EST_PHYREAD and (2nd Y-axis) BP1_EST_DBTIME over (X-axis) BP1_SGA_TGT_SIZE
(1st Y-axis) BP2_EST_PHYREAD and (2nd Y-axis) BP2_EST_DBTIME over (X-axis) BP2_SGA_TGT_SIZE
(1st Y-axis) BP3_EST_PHYREAD and (2nd Y-axis) BP3_EST_DBTIME over (X-axis) BP3_SGA_TGT_SIZE

The values and order of TGT_SIZE will be consistent across all events. The values of PHYREAD and DBTIME will vary across events. Thus far I have created the following search query:

index=oraawr sourcetype="awr_general_stats" CUSTOMER_NAME="U*" DB_NAME="SA*" INSTANCE_NAME="sa*1" SNAP_ID=*
| eval TGT_SIZE=mvappend(BM3_SGA_TGT_SIZE,BM2_SGA_TGT_SIZE,BM1_SGA_TGT_SIZE,B_SGA_TGT_SIZE,BP1_SGA_TGT_SIZE,BP2_SGA_TGT_SIZE,BP3_SGA_TGT_SIZE)
| eval EST_PHYREAD=mvappend(BM3_EST_PHYREAD,BM2_EST_PHYREAD,BM1_EST_PHYREAD,B_EST_PHYREAD,BP1_EST_PHYREAD,BP2_EST_PHYREAD,BP3_EST_PHYREAD)
| eval EST_DBTIME=mvappend(BM3_EST_DBTIME,BM2_EST_DBTIME,BM1_EST_DBTIME,B_EST_DBTIME,BP1_EST_DBTIME,BP2_EST_DBTIME,BP3_EST_DBTIME)
| table TGT_SIZE EST_PHYREAD EST_DBTIME

It returns the following table, where each cell is a multivalue field:

TGT_SIZE | EST_PHYREAD | EST_DBTIME
4280 5136 5992 6848 7704 8560 9416 | 153575927 87554052 58277513 47511424 42042859 38764571 38764571 | 521192 460130 440788 433676 430033 427867 427433
4280 5136 5992 6848 7704 8560 9416 | 146642502 78447471 57377856 45913304 41634184 38158547 36836244 | 505028 448212 434136 426544 423644 421383 420402
4280 5136 5992 6848 7704 8560 9416 | 115711228 68383308 53069056 45564571 40324645 37909723 36597463 | 480300 441426 424928 419806 416406 414684 413761
4280 5136 5992 6848 7704 8560 9416 | 107453756 62497844 48506164 41866187 38114977 35611379 33961851 | 466344 427121 417780 413316 410795 409100 407984
...<continues>...

Each row corresponds to a single event. In this particular case there are 93 events, so there are 93 rows.

My initial thought was to consolidate the rows: instead of 93 rows of 7 TGT_SIZE values each, I would have 651 rows, keeping the order, and do the same for EST_PHYREAD and EST_DBTIME. But I can't seem to figure out the correct commands to use. Any guidance would be greatly appreciated. Thanks,
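One way to flatten the multivalue cells into one row per (size, phyread, dbtime) triple, assuming the three fields stay aligned (a sketch to append after the evals above): zip the three multivalue fields together, expand, then split each triple back out:

| eval zipped = mvzip(mvzip(TGT_SIZE, EST_PHYREAD, ","), EST_DBTIME, ",")
| mvexpand zipped
| eval TGT_SIZE = mvindex(split(zipped, ","), 0), EST_PHYREAD = mvindex(split(zipped, ","), 1), EST_DBTIME = mvindex(split(zipped, ","), 2)
| table TGT_SIZE EST_PHYREAD EST_DBTIME

With 93 events x 7 values this yields the 651 rows described above, ready for a scatter chart.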
I am a newbie in Splunk. I need help creating a report to show new log sources that have been added to Splunk.
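One common starting point, as a sketch (the index scope and the "first seen in the last 7 days = new" window are assumptions): use tstats to find when each source was first seen, then keep only the recent ones:

| tstats min(_time) as first_seen where index=* by index, source
| where first_seen >= relative_time(now(), "-7d@d")
| convert ctime(first_seen)
| sort - first_seen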
Please help me extract adsId, offerName, currentProductDescription, offerAccountToken, offerType, and offerIdentifier from the event below:

message={"name":"com. ","level":"info","message":"Create -->|Request identifier : 09accf30-6cf7-4e4f-a633-c19808eff766|CreateAccountOfferEnrollment.v1|REQUEST ---> {\"correlationId\":\"09accf30-6cf7-4e4f-a633-c19808eff766\",\"ccpId\":\"HA6952B\",\"callId\":\"0109\",\"adsId\":\"camar\",\"customerId\":\"63038\",\"eventType\":\"CVP-INSTANT\",\"channelIdentifier\":\"CVP\",\"lineOfBusiness\":\"CCSG\",\"offerName\":\"Additional\",\"offerIdentifier\":\"A000\",\"sourceProductCode\":\"2X\",\"currentProductIdentifier\":\"2X\",\"currentProductDescription\":\"Pl\",\"destinationProductCode\":\"2X\",\"destinationProductName\":\"Plat\",\"fulfillmentCode\":\"GNAS\",\"requestHasSupps\":true,\"offerType\":\"consumer-stand-alone-supp\",\"offerAccountToken\":\"YAS\",\"marketName\":\"US\",\"numberOfSupps\":1,\"calledInAccountToken\":\"YAS\",\"fullName\":{\"firstName\":\"M\",\"lastName\":\"C\",\"middleName\":\"A\",\"prefix\":\"\",\"suffix\":\"\"},\"communicationInformation\":{\"channel\":\"EMAIL\",\"communicationVariables\":[],\"locale\":\"en_US\",\"physicalAddress\":{\"city\":\"P\",\"state\":\"FL\",\"zipCode\":\"33\",\"lines\":[\"48 \",\"#0114\",\"\"]},\"emailAddress\":\"cru@gmail.com\",\"isoCountryCode\":\"840\"},\"enrollmentInformation\":{\"id\":\"2023\",\"is_customer_offline\":false,\"channel_received_datetime\":\"20230109T171713.842 GMT\",\"dynamic_journey\":\"DYNAMIC_INSTANT\",\"rep_id\":\"HA6952B\",\"country_code\":\"840\",\"journey\":\"INSTANT_DECISION\",\"journey_stage\":\"SUPP",\"applicants\":[{\"number\":0,\"amex_relationship\":{\"relationships\":[{\"number\":\"3726\",\"type\""CARD\"}]},\"type\""PRIMARY\"},{\"number\":1,\"type\""NONPRIMARY\",\"has_spending_limit\":false,\"is_signature_available\":true,\"has_cash_restriction\":false,\"experience_id\""829e34d6-e89f-422b-b355-811b1aa2c79c\",\"names\":[{\"language\""EN\",\"name\":{\"first\""V\",\"last\""C\"}}],\"identifiers\":[{\"system\""DELIVERY_METHOD_IDENTIFIER\",\"id\""510DELVIDP256Cn+ has_same_address_as_primary\":false,\"emboss_name\""V\",\"language\""EN\",\"birth_date\""19\",\"spending_limit\":0,\"experience_choices\":[{\"selected_id\""USA_CONSUMER \",\"feature_name\""CARD_DESIGN\"}],\"product\":{\"offer_arrangement_id\""de7c960c46c7\",\"source_code\""A0000FYC4T\",\"short_product_id\""L81\",\"sub_product_code\""2X\"},\"addresses\":[{\"type\""HOME\",\"address\":{\"line1\""7B\",\"city\""HOUSTON\",\"region\""TX\",\"postal_code\""77028-4570\",\"country\""840\"}},{\"type\""TEMPORARY_ADDRESS\",\"address\":{\"line1\""790\",\"city\""HO\",\"region\""T\",\"postal_code\""77\",\"country\""840\"}}]}]},\"misProcessId\""3016428984\"}"}

@ITWhisperer @VatsalJagani please help
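A rough sketch of one approach (heavily hedged: the pasted sample looks mangled, spath needs the payload to be valid JSON once the backslash escaping is stripped, and the regex and escaping will likely need tuning against real events): pull out the JSON after "REQUEST --->", remove the escaping, and let spath extract the fields:

| rex "REQUEST ---> (?<payload>\{.*\})"
| eval payload = replace(payload, "\\\\", "")
| spath input=payload
| table adsId offerName currentProductDescription offerAccountToken offerType offerIdentifier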
2023-01-09T16:46:00.780076351Z app_name=default-java environment=e3 ns=one pod_container=default-java pod_name=default stream=stdout message={"name":"com","timestamp":"2023-01-09T16:46:00.779Z","level":"info","schemaVersion":"0.1","application":{"name":"com ","version":"1.2.5"},"request":{"address":{"uri":"Read/1.2.5"},"metadata":{"one-data-correlation-id":"d5d3 ","one-data-trace-id":"0be"}},"message":"Parent Function Address: Read, Request identifier: d5d35c6e-3661-4445-bbe4-f5a3f382d035, REQUEST-RECEIVED: {\"requestIdentifier\""d5 \",\"clientIdentifier\""CUST \",\"locale\""en-US\",\"userId\""lkapla\",\"accountNumber\""1234\",\"treatmentsFilter\":[\"targeted\",\"messages\"],\"callerType\""ADDTL\",\"cancelType\""\",\"handle\""gsp00a79e6b_b610_3407_90fa_11d5417c0b7f\",\"callTimeStamp\""1/9/2023 9:46:00 AM\",\"callIdentifier\""01091\",\"geoTelIdentifier\""04ba\"}, "}

How do I extract the time, userId, and clientIdentifier into a table?
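A hedged sketch (the sample's escaping looks mangled, so the regex and the replace will likely need tuning; _time here is the event's own timestamp): extract the JSON after "REQUEST-RECEIVED:", strip the escaping, and table the fields:

| rex "REQUEST-RECEIVED: (?<payload>\{.*\})"
| eval payload = replace(payload, "\\\\", "")
| spath input=payload
| table _time userId clientIdentifier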