All Topics



I am observing intermittent issues parsing IIS data. Splunk is configured for index-time parsing of IIS events on the universal forwarders (INDEXED_EXTRACTIONS). The extraction works fine for most events, but a small percentage (less than 1%) fail parsing. I am detecting the events that fail parsing with the following SPL:

index=[IIS INDEXES] sourcetype=iis NOT c_ip=*

I have noticed an error in splunkd.log on the universal forwarders that accounts for some of these issues:

04-06-2022 20:08:42.602 -0500 WARN CsvLineBreaker - Parser warning: Encountered unescaped quotation mark in field while parsing. This may cause inaccurate field extractions or corrupt/merged events. - data_source="e:\iis-logs\W3SVC1\u_ex220407.log", data_host="XXXXX", data_sourcetype="iis"

In these cases, it appears that not only does index-time field parsing fail, but event breaking fails as well, resulting in many events getting lumped into a single event. This may not be avoidable, and we're at least able to point to a cause for these failures, but many more are unexplained. For most of the events that fail parsing, the result is a single-line event which appears to be formatted correctly but has no indexed fields. I was originally having an issue with these events reporting timestamps in the future as well, but adding a time zone to props.conf seems to have resolved that. I have upgraded through several versions (8.1.2, 8.2.3, 8.2.7.1) on the universal forwarders and have seen this issue across all of them. If you have any ideas on what might be causing these index-time parsing failures for IIS data, I would love to hear them.
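For comparison, here is a minimal props.conf sketch for UF-side index-time extraction of IIS logs. The stanza values below are assumptions based on a common IIS/W3C setup, not the poster's actual configuration, so adjust them to match your deployment:

```
[iis]
INDEXED_EXTRACTIONS = w3c
FIELD_DELIMITER = whitespace
detect_trailing_nulls = auto
# An explicit TZ addresses future-timestamp issues like the one mentioned above;
# the zone here is only an example, pick the one matching the IIS server.
TZ = US/Eastern
```

With INDEXED_EXTRACTIONS, these settings must live on the forwarder itself, since that is where the parsing happens.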
Working on a Splunk Cloud environment - we always keep things tidy by re-assigning ownership of knowledge objects (KOs) to either Nobody or the correct user with the correct role. But why can't we do this for calculated fields?
Hi guys, I'm trying to show a table where one column displays icons based on the cell value. I used the examples provided in the Splunk Dashboard Examples app. The problem I'm experiencing is that when I load the dashboard, the table is rendered with the plain string, but if I switch to edit mode the table is rendered correctly with the icons. Reloading the page renders the table without icons, and any time I go through edit mode the table renders with icons.

The dashboard:

<dashboard version="1.1" stylesheet="css/test.css" script="js/test.js">
  <label>test</label>
  <row>
    <panel>
      <table id="table1">
        <search>
          <query>| inputcsv thermal_structure.csv | eval range = case(status=="ok","low",status=="warning","elevated",status=="no","severe") | table zone,range</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>

The CSS (css/test.css):

/* Custom Icons */
td.icon { text-align: center; }
td.icon i { font-size: 25px; text-shadow: 1px 1px #aaa; }
td.icon .severe { color: red; }
td.icon .elevated { color: orangered; }
td.icon .low { color: #006400; }
/* Row Coloring */
#highlight tr td { background-color: #c1ffc3 !important; }
#highlight tr.range-elevated td { background-color: #ffc57a !important; }
#highlight tr.range-severe td { background-color: #d59392 !important; }
#highlight .table td { border-top: 1px solid #fff; }
#highlight td.range-severe, td.range-elevated { font-weight: bold; }
.icon-inline i { font-size: 18px; margin-left: 5px; }
.icon-inline i.icon-alert-circle { color: #ef392c; }
.icon-inline i.icon-alert { color: #ff9c1a; }
.icon-inline i.icon-check { color: #5fff5e; }
/* Dark Theme */
td.icon i.dark { text-shadow: none; }
/* Row Coloring */
#highlight tr.dark td { background-color: #5BA383 !important; }
#highlight tr.range-elevated.dark td { background-color: #EC9960 !important; }
#highlight tr.range-severe.dark td { background-color: #AF575A !important; }
#highlight .table .dark td { border-top: 1px solid #000000; color: #F2F4F5; }

The JavaScript (js/test.js):

requirejs([
  '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min',
  '../app/simple_xml_examples/theme_utils',
  'splunkjs/mvc',
  'splunkjs/mvc/tableview',
  'splunkjs/mvc/simplexml/ready!'
], function(_, themeUtils, mvc, TableView) {
  // Translations from rangemap results to CSS class
  var ICONS = {
    severe: 'alert-circle',
    elevated: 'alert',
    low: 'check-circle'
  };
  var RangeMapIconRenderer = TableView.BaseCellRenderer.extend({
    canRender: function(cell) {
      // Only use the cell renderer for the range field
      return cell.field === 'range';
    },
    render: function($td, cell) {
      var icon = 'question';
      var isDarkTheme = themeUtils.getCurrentTheme() === 'dark';
      // Fetch the icon for the value
      // eslint-disable-next-line no-prototype-builtins
      if (ICONS.hasOwnProperty(cell.value)) {
        icon = ICONS[cell.value];
      }
      // Create the icon element and add it to the table cell
      $td.addClass('icon').html(_.template('<i class="icon-<%-icon%> <%- range %> <%- isDarkTheme %>" title="<%- range %>"></i>', {
        icon: icon,
        range: cell.value,
        isDarkTheme: isDarkTheme ? 'dark' : ''
      }));
    }
  });
  mvc.Components.get('table1').getVisualization(function(tableView) {
    // Register custom cell renderer, the table will re-render automatically
    tableView.addCellRenderer(new RangeMapIconRenderer());
  });
});
I have a problem with my Splunk.com credentials when I attempt to install the "Splunk Add-on for Microsoft Windows" app. The error message says that the credentials are incorrect, but I tested the credentials in my browser and they are correct. I reset my password on the splunk.com web site, but I still can't log on to install the app in Splunk. Can you help me with this problem, please?
Hello, I am very new to Splunk and would greatly appreciate any advice. I have a collection of reports that each contain the following fields, one set per prefix (BM3, BM2, BM1, B, BP1, BP2, BP3):

BM3_SGA_TGT_SIZE, BM2_SGA_TGT_SIZE, BM1_SGA_TGT_SIZE, B_SGA_TGT_SIZE, BP1_SGA_TGT_SIZE, BP2_SGA_TGT_SIZE, BP3_SGA_TGT_SIZE
BM3_EST_PHYREAD, BM2_EST_PHYREAD, BM1_EST_PHYREAD, B_EST_PHYREAD, BP1_EST_PHYREAD, BP2_EST_PHYREAD, BP3_EST_PHYREAD
BM3_EST_DBTIME, BM2_EST_DBTIME, BM1_EST_DBTIME, B_EST_DBTIME, BP1_EST_DBTIME, BP2_EST_DBTIME, BP3_EST_DBTIME

Each report has been added to Splunk as an event. I would like to create a single scatter plot that aggregates the data across all the events and, for each prefix, plots:

(1st Y-axis) <prefix>_EST_PHYREAD and (2nd Y-axis) <prefix>_EST_DBTIME over (X-axis) <prefix>_SGA_TGT_SIZE

The values and order of TGT_SIZE will be consistent across all events. The values of PHYREAD and DBTIME will vary across events.
Thus far I have created the following search query:

index=oraawr sourcetype="awr_general_stats" CUSTOMER_NAME="U*" DB_NAME="SA*" INSTANCE_NAME="sa*1" SNAP_ID=*
| eval TGT_SIZE=mvappend(BM3_SGA_TGT_SIZE,BM2_SGA_TGT_SIZE,BM1_SGA_TGT_SIZE,B_SGA_TGT_SIZE,BP1_SGA_TGT_SIZE,BP2_SGA_TGT_SIZE,BP3_SGA_TGT_SIZE)
| eval EST_PHYREAD=mvappend(BM3_EST_PHYREAD,BM2_EST_PHYREAD,BM1_EST_PHYREAD,B_EST_PHYREAD,BP1_EST_PHYREAD,BP2_EST_PHYREAD,BP3_EST_PHYREAD)
| eval EST_DBTIME=mvappend(BM3_EST_DBTIME,BM2_EST_DBTIME,BM1_EST_DBTIME,B_EST_DBTIME,BP1_EST_DBTIME,BP2_EST_DBTIME,BP3_EST_DBTIME)
| table TGT_SIZE EST_PHYREAD EST_DBTIME

It returns a table where each row corresponds to a single event and each cell is a multivalue field of 7 values:

TGT_SIZE | EST_PHYREAD | EST_DBTIME
4280 5136 5992 6848 7704 8560 9416 | 153575927 87554052 58277513 47511424 42042859 38764571 38764571 | 521192 460130 440788 433676 430033 427867 427433
4280 5136 5992 6848 7704 8560 9416 | 146642502 78447471 57377856 45913304 41634184 38158547 36836244 | 505028 448212 434136 426544 423644 421383 420402
4280 5136 5992 6848 7704 8560 9416 | 115711228 68383308 53069056 45564571 40324645 37909723 36597463 | 480300 441426 424928 419806 416406 414684 413761
4280 5136 5992 6848 7704 8560 9416 | 107453756 62497844 48506164 41866187 38114977 35611379 33961851 | 466344 427121 417780 413316 410795 409100 407984
......<continues>.....

In this particular case there are 93 events, so there are 93 rows. My initial thought was to consolidate the rows: instead of 93 rows of 7 values of TGT_SIZE, I would have 651 single-value rows, keeping the order, and do the same for EST_PHYREAD and EST_DBTIME. But I can't seem to figure out the correct commands to use. Any guidance would be greatly appreciated. Thanks,
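One possible way to flatten the multivalue rows is a sketch using mvzip/mvexpand, replacing the final table command of the search above (the "|" delimiter is an arbitrary choice; any character not present in the values works):

```
| eval zipped=mvzip(mvzip(TGT_SIZE, EST_PHYREAD, "|"), EST_DBTIME, "|")
| mvexpand zipped
| eval TGT_SIZE=mvindex(split(zipped, "|"), 0)
| eval EST_PHYREAD=mvindex(split(zipped, "|"), 1)
| eval EST_DBTIME=mvindex(split(zipped, "|"), 2)
| table TGT_SIZE EST_PHYREAD EST_DBTIME
```

mvzip pairs the values positionally, mvexpand turns each 7-value row into 7 single-value rows, and split/mvindex unpacks the delimited triples, so 93 events should yield 93 × 7 = 651 scalar rows suitable for charting.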
I am a newbie in Splunk. I need help creating a report to show new log sources that have been added to Splunk.
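One common sketch for this kind of report is to compare each source's first-seen time against a recency window (the 7-day window and the tstats approach are assumptions; note that scanning all indexes over all time can be expensive, so constrain the index list where possible):

```
| tstats earliest(_time) as first_seen where index=* by index, sourcetype, source
| where first_seen >= relative_time(now(), "-7d@d")
| convert ctime(first_seen)
| sort - first_seen
```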
please help extract adsId,offerName, currentProductDescription, offerAccountToken, offerType, offerIdentifier message={"name":"com. ","level":"info","message":"Create -->|Request identifier : 09accf30-6cf7-4e4f-a633-c19808eff766|CreateAccountOfferEnrollment.v1|REQUEST ---> {\"correlationId\":\"09accf30-6cf7-4e4f-a633-c19808eff766\",\"ccpId\":\"HA6952B\",\"callId\":\"0109\",\"adsId\":\"camar\",\"customerId\":\"63038\",\"eventType\":\"CVP-INSTANT\",\"channelIdentifier\":\"CVP\",\"lineOfBusiness\":\"CCSG\",\"offerName\":\"Additional\",\"offerIdentifier\":\"A000\",\"sourceProductCode\":\"2X\",\"currentProductIdentifier\":\"2X\",\"currentProductDescription\":\"Pl\",\"destinationProductCode\":\"2X\",\"destinationProductName\":\"Plat\",\"fulfillmentCode\":\"GNAS\",\"requestHasSupps\":true,\"offerType\":\"consumer-stand-alone-supp\",\"offerAccountToken\":\"YAS\",\"marketName\":\"US\",\"numberOfSupps\":1,\"calledInAccountToken\":\"YAS\",\"fullName\":{\"firstName\":\"M\",\"lastName\":\"C\",\"middleName\":\"A\",\"prefix\":\"\",\"suffix\":\"\"},\"communicationInformation\":{\"channel\":\"EMAIL\",\"communicationVariables\":[],\"locale\":\"en_US\",\"physicalAddress\":{\"city\":\"P\",\"state\":\"FL\",\"zipCode\":\"33\",\"lines\":[\"48 \",\"#0114\",\"\"]},\"emailAddress\":\"cru@gmail.com\",\"isoCountryCode\":\"840\"},\"enrollmentInformation\":{\"id\":\"2023\",\"is_customer_offline\":false,\"channel_received_datetime\":\"20230109T171713.842 
GMT\",\"dynamic_journey\":\"DYNAMIC_INSTANT\",\"rep_id\":\"HA6952B\",\"country_code\":\"840\",\"journey\":\"INSTANT_DECISION\",\"journey_stage\":\"SUPP",\"applicants\":[{\"number\":0,\"amex_relationship\":{\"relationships\":[{\"number\":\"3726\",\"type\""CARD\"}]},\"type\""PRIMARY\"},{\"number\":1,\"type\""NONPRIMARY\",\"has_spending_limit\":false,\"is_signature_available\":true,\"has_cash_restriction\":false,\"experience_id\""829e34d6-e89f-422b-b355-811b1aa2c79c\",\"names\":[{\"language\""EN\",\"name\":{\"first\""V\",\"last\""C\"}}],\"identifiers\":[{\"system\""DELIVERY_METHOD_IDENTIFIER\",\"id\""510DELVIDP256Cn+ has_same_address_as_primary\":false,\"emboss_name\""V\",\"language\""EN\",\"birth_date\""19\",\"spending_limit\":0,\"experience_choices\":[{\"selected_id\""USA_CONSUMER \",\"feature_name\""CARD_DESIGN\"}],\"product\":{\"offer_arrangement_id\""de7c960c46c7\",\"source_code\""A0000FYC4T\",\"short_product_id\""L81\",\"sub_product_code\""2X\"},\"addresses\":[{\"type\""HOME\",\"address\":{\"line1\""7B\",\"city\""HOUSTON\",\"region\""TX\",\"postal_code\""77028-4570\",\"country\""840\"}},{\"type\""TEMPORARY_ADDRESS\",\"address\":{\"line1\""790\",\"city\""HO\",\"region\""T\",\"postal_code\""77\",\"country\""840\"}}]}]},\"misProcessId\""3016428984\"}"}     @ITWhisperer @VatsalJagani please help
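A hedged sketch of one approach, assuming the wanted fields live in the backslash-escaped JSON payload after "REQUEST --->" (the exact escaping in your raw events may differ from the paste above, so the replace() pattern may need tuning):

```
| rex field=message "REQUEST ---> (?<payload>\{.+\})"
| eval payload=replace(payload, "\\\"", "\"")
| spath input=payload
| table adsId offerName currentProductDescription offerAccountToken offerType offerIdentifier
```

The idea is to isolate the JSON blob with rex, strip the escaping so it becomes valid JSON, and let spath do the field extraction.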
2023-01-09T16:46:00.780076351Z app_name=default-java environment=e3 ns=one pod_container=default-java pod_name=default stream=stdout message={"name":"com","timestamp":"2023-01-09T16:46:00.779Z","level":"info","schemaVersion":"0.1","application":{"name":"com ","version":"1.2.5"},"request":{"address":{"uri":"Read/1.2.5"},"metadata":{"one-data-correlation-id":"d5d3 ","one-data-trace-id":"0be"}},"message":"Parent Function Address: Read, Request identifier: d5d35c6e-3661-4445-bbe4-f5a3f382d035, REQUEST-RECEIVED: {\"requestIdentifier\""d5 \",\"clientIdentifier\""CUST \",\"locale\""en-US\",\"userId\""lkapla\",\"accountNumber\""1234\",\"treatmentsFilter\":[\"targeted\",\"messages\"],\"callerType\""ADDTL\",\"cancelType\""\",\"handle\""gsp00a79e6b_b610_3407_90fa_11d5417c0b7f\",\"callTimeStamp\""1/9/2023 9:46:00 AM\",\"callIdentifier\""01091\",\"geoTelIdentifier\""04ba\"}, "}   I want to extract the time, userid and  clientIdentifier in a table?  
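A sketch of one way to pull those three values into a table, assuming the payload keeps the escaped-JSON shape shown above (field names are taken from the event; the escaping in the replace() and rex patterns may need adjustment against your raw data):

```
| eval payload=replace(message, "\\\"", "\"")
| rex field=payload "\"callTimeStamp\":\"(?<callTime>[^\"]+)\""
| rex field=payload "\"userId\":\"(?<userId>[^\"]+)\""
| rex field=payload "\"clientIdentifier\":\"(?<clientIdentifier>[^\"]+)\""
| table callTime userId clientIdentifier
```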
Splunk AppInspect reports the "check_for_supported_tls" failure with this description: "If you are using requests.post to talk to your own infra with non-public PKI, make sure you bundle your own CA certs as part of your app and pass the path into requests.post as an arg." I am using verify=False in the requests.post() method and getting the above error from the AppInspect tool.
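For context, a minimal sketch of the shape AppInspect is asking for: pass a bundled CA path as `verify` instead of `False`. The helper name and the `certs/ca.pem` location are my own assumptions, not a Splunk API; point the path at wherever your app actually ships its CA bundle.

```python
import os

def post_with_bundled_ca(session, url, app_root, **kwargs):
    # Build the path to the CA bundle shipped inside the app.
    # ASSUMPTION: the app stores its certs under <app_root>/certs/ca.pem.
    ca_bundle = os.path.join(app_root, "certs", "ca.pem")
    # Verify TLS against the bundled CA instead of disabling verification.
    kwargs.setdefault("verify", ca_bundle)
    return session.post(url, **kwargs)
```

Called with a `requests.Session()` as `session`, this keeps certificate verification on while trusting your private PKI, which is what the check wants to see.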
I am writing a custom search command that is quite performance-sensitive. On every invocation the script is called twice and runs up to the prepare() method, which puts an unnecessary strain on the system. I suspect this double invocation is related to the command's general slowness, so the ability to disable it would be nice.
I am building a GeneratingCommand, and even in the most basic version a lot of time passes between the invocation of prepare() (called about 50 ms after start) and generate() (called about 170 ms after start). We plan to invoke this command frequently, so this gap matters. How can I reduce it?
Yes, I am a beginner! I installed the Splunk Enterprise free trial on Windows, but I wanted to install it on RHEL. Can I install another free trial on RHEL?
The page "About non-Python custom search commands" mentions that it is possible to write v2 custom search commands in languages other than Python, but there is absolutely no information about how such a thing would be implemented. What's the protocol? The closest thing to an explanation of the protocol I've found is NDietrich's GitHub repo and their accompanying talk, which I find rather disappointing. How come there is no official information to be found about it?
Hi all, I am a very new Splunk admin and am trying to peel back the onion on the previous admin's shenanigans in this Splunk environment. I have a dashboard that was created by a user in the "search" app, and they have requested that I delete the dashboard for them, as they cannot. What is the proper way to do this? The only place I can find it is on all 3 search head cluster members, under the path "/opt/splunk/etc/apps/search/local/data/ui/views/${dashboard_name}.xml". I cannot find it on the cluster master, either in /etc/apps or /etc/shcluster/apps. Please help me figure out what to do next. Is it as simple as removing that XML file from all 3 search heads at the same time? Thanks in advance.
Does anyone know what the next release's version number will be, and what the timeframe for that release is? I have an offline instance which we are about to start updating, but I don't want to do it unless we will have the most up-to-date version for a while.
My URL looks like https://google.demo.com/sites/demo/support/shared.demo/dump/ and I want to use regex to extract https://google.demo.com/sites/demo/support/. What rex do I need?
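A sketch that captures everything up through the third path segment (this assumes the URL sits in a field named url; swap in your actual field name, or use field=_raw):

```
| rex field=url "(?<base_url>https?://[^/]+/sites/[^/]+/[^/]+/)"
```

Against the example URL above, base_url should come out as https://google.demo.com/sites/demo/support/.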
Hi all, good day. We are using the DB Connect add-on to pull logs from multiple DBs and have created several inputs. We want to track activity in the DB Connect app, e.g. whether any user changes anything, creates new inputs, or disables inputs. Are there any searches to check that activity?
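As a starting point, configuration changes made through Splunk Web are generally recorded in the _audit index; a hedged sketch (the app directory name splunk_app_db_connect and the available fields vary by version, so inspect the raw audit events first and adjust):

```
index=_audit sourcetype=audittrail splunk_app_db_connect
| table _time user action info
| sort - _time
```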
My current project polls a device every 15 minutes to pull a counter which is then charted. Thanks to members here, I now have this working as desired. Here is an example search:

index=index | where key="key_01" | timechart span=15m values(value) by mac_address

The key "key_01" is a counter that increases over time. If there is no more activity, the key stays at its current value, so over time we are counting totals. This produces a lovely line chart or bar chart. I would now like to display the delta between the values instead, so rather than showing the accumulated total, we only see the "new" counts since the last value, i.e. the delta. I've been reading posts and playing with the delta command but so far haven't been able to get it to work. Here is what I thought I would need:

index=index | where key="key_01" | delta key_01 as delta_01 | timechart span=15m values(value) by mac_address

I would like to ask if anyone can help with getting the syntax right. As always, any help very much appreciated! NM
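For what it's worth, one sketch of a per-device delta: since the delta command works on adjacent events regardless of mac_address, streamstats with a by clause is one way to compute the previous value per device (counter resets, which would produce negative deltas, are not handled here):

```
index=index | where key="key_01"
| sort 0 _time
| streamstats current=f last(value) as prev_value by mac_address
| eval delta_value = value - prev_value
| timechart span=15m values(delta_value) by mac_address
```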
dbxquery allows queries using direct SQL. However, dbxoutput only works with output objects defined in the DB Connect app. Is there a fundamental reason for limiting this function so it must be done only through the DB Connect app? For example, if it is a security concern, that could simply be addressed by granting a separate permission. I wonder if there might be some other underlying reason. This text was written with the help of a translator, so it may read awkwardly; please understand.
Hi Splunk Community, I wondered if there was any way to match a keyword against a string in a lookup. For example, I have a lookup containing a field with a string:

items: "orange apple banana", description: fruit

And I have this field in my search results:

| makeresults | eval item="apple"

Is there any way I can use the lookup above to match "apple" against "orange apple banana" and output "fruit" from the description field? I can achieve the reverse of this with wildcard matching (e.g. "orange apple banana" matched against *apple*), but haven't been able to find a way to match a single keyword against a string. Does anyone know if this is possible? Thanks
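One possible sketch: expand the lookup's space-separated items field into one keyword per row, then join on that keyword (my_lookup is a placeholder for your lookup's name, and this assumes items is strictly space-delimited):

```
| makeresults
| eval item="apple"
| join type=inner item
    [| inputlookup my_lookup
     | eval item=split(items, " ")
     | mvexpand item]
| table item description
```

After mvexpand, the subsearch holds one row per keyword ("orange", "apple", "banana"), each carrying the description, so the join on item matches "apple" and brings back "fruit".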