All Posts

I just checked /system/default/web.conf and the settings you mentioned before are there, but commented out. It says that I have to set them in local/web.conf if I run my Splunk behind the reverse proxy. Is that the correct location? To be safe, do I have to copy that to /local first, or can I just simply enable it?
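As a general pattern, files under system/default are never edited directly (they are overwritten on upgrade); instead you create a local file containing only the settings you want to change. A minimal sketch, where tools.proxy.on stands in for whichever reverse-proxy setting your default file actually refers to:

    # $SPLUNK_HOME/etc/system/local/web.conf
    # Copy only the settings you need into local/ - the setting name below is an
    # assumption, use the one referenced in your default web.conf.
    [settings]
    tools.proxy.on = true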
I've written a Splunk query and ran it, and it gives the results as expected, but as soon as I click on "Create Table View" some of the fields that were there after the query run disappear. I'm not sure what is wrong, could anyone help?
A JSON dashboard definition is for Dashboard Studio, not Classic. What is your question here (or does that already answer it)?
Thank you @yuanliu @jawahir007. Both of your solutions are working absolutely fine. @yuanliu, yes, index A always has a larger number of hosts compared to index B. I would like to further expand this query to match the IP address as well. Can you provide some guidance around that?

Index A data:

    Hostname   IP address                                  OS
    xyz        190.1.1.1, 101.2.2.2, 102.3.3.3, 4.3.2.1    Windows
    zbc        100.0.1.0                                   Linux
    alb        190.1.0.2                                   Windows
    cgf        20.4.2.1                                    Windows
    bcn        20.5.3.4, 30.4.6.1                          Solaris

Index B:

    Hostname
    zbc
    30.4.6.1
    alb
    101.2.2.2

Results:

    Hostname   IP address                                  OS        match
    xyz        190.1.1.1, 101.2.2.2, 102.3.3.3, 4.3.2.1    Windows   ok (because IP address 101.2.2.2 is matching)
    zbc        100.0.1.0                                   Linux     ok
    alb        190.1.0.2                                   Windows   ok
    cgf        20.4.2.1                                    Windows   missing (neither the hostname is present nor the IP is matching)
    bcn        20.5.3.4, 30.4.6.1                          Solaris   ok (IP is matching)

In my initial use case, I compared the hostnames in index A with those in index B. Now, I want to check whether the hosts in index A are reporting their IP addresses in index B. If there is a match, I will mark the corresponding hostname in index A as "ok".
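Not a definitive answer, but here is a minimal SPL sketch of one way to extend the earlier comparison. It assumes the indexes really are called A and B, that index A has the fields Hostname, "IP address", and OS from the example, and that the single field in index B can hold either a hostname or an IP address; adjust all of these names to your data.

    (index=A) OR (index=B)
    | rename "IP address" as ip_list
    ``` build one multivalue key per event: the hostname plus every individual IP address ```
    | eval key=mvappend(Hostname, split(replace(ip_list, " ", ""), ","))
    | eval side=if(index=="A", "A", "B")
    | mvexpand key
    ``` note, per key value, which index(es) it was seen in ```
    | eventstats values(side) as seen_on by key
    | where side=="A"
    ``` a host is "ok" if any of its keys (hostname or one of its IPs) also appears in index B ```
    | eval key_in_b=if(isnotnull(mvfind(seen_on, "^B$")), 1, 0)
    | stats values(ip_list) as ip_list values(OS) as OS max(key_in_b) as key_in_b by Hostname
    | eval match=if(key_in_b==1, "ok", "missing")
    | fields - key_in_b

The mvexpand is there so that each IP address can be matched on its own; on large datasets you may prefer to restrict the time range or pre-filter both indexes first.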
Hello, guys! I'm trying to use the episodes table as the base search in the Edit Dashboard view, as well in the Dashboard Classic using the source, but here we already have the results in the table. I'll attach my code snippet below:    { "dataSources": { "dsQueryCounterSearch1": { "options": { "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "mttrSearch": { "options": { "query": "| `itsi_event_management_get_mean_time(resolved)`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "episodesBySeveritySearch": { "options": { "query": "|`itsi_event_management_episode_by_severity`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "noiseReductionSearch": { "options": { "query": "| `itsi_event_management_noise_reduction`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "percentAckSearch": { "options": { "query": "| `itsi_event_management_get_episode_count(acknowledged)` | eval acknowledgedPercent=(Acknowledged/total)*100 | table acknowledgedPercent", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "mttaSearch": { "options": { "query": "| `itsi_event_management_get_mean_time(acknowledged)`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" } }, "visualizations": { "vizQueryCounterSearch1": { "title": "Query Counter 1", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0 }, "dataSources": { "primary": "dsQueryCounterSearch1" } }, "episodesBySeverity": { "title": "Episodes by Severity", "type": "splunk.bar", "options": { "backgroundColor": "#ffffff", "barSpacing": 5, "dataValuesDisplay": "all", "legendDisplay": "off", "showYMajorGridLines": false, "yAxisLabelVisibility": "hide", "xAxisMajorTickVisibility": "hide", "yAxisMajorTickVisibility": "hide", "xAxisTitleVisibility": "hide", "yAxisTitleVisibility": "hide" }, "dataSources": { "primary": "episodesBySeveritySearch" } }, "noiseReduction": { "title": "Total Noise Reduction", "type": "splunk.singlevalue", "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorThresholds)", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "context": { "backgroundColorThresholds": [ { "from": 95, "value": "#65a637" }, { "from": 90, "to": 95, "value": "#6db7c6" }, { "from": 87, "to": 90, "value": "#f7bc38" }, { "from": 85, "to": 87, "value": "#f58f39" }, { "to": 85, "value": "#d93f3c" } ] }, "dataSources": { "primary": "noiseReductionSearch" } }, "percentAck": { "title": "Episodes Acknowledged", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "dataSources": { "primary": "percentAckSearch" } }, "mtta": { "title": "Mean Time to Acknowledged", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" }, "dataSources": { "primary": "mttaSearch" } } }, "layout": { "type": "grid", "options": { "display": "auto-scale", "height": 240, "width": 1440 
}, "structure": [ { "item": "vizQueryCounterSearch1", "type": "block", "position": { "x": 0, "y": 80, "w": 288, "h": 220 } }, { "item": "episodesBySeverity", "type": "block", "position": { "x": 288, "y": 80, "w": 288, "h": 220 } }, { "item": "noiseReduction", "type": "block", "position": { "x": 576, "y": 80, "w": 288, "h": 220 } }, { "item": "percentAck", "type": "block", "position": { "x": 864, "y": 80, "w": 288, "h": 220 } }, { "item": "mtta", "type": "block", "position": { "x": 1152, "y": 80, "w": 288, "h": 220 } } ] } }       I really appreciate your help, have a great day
Hello all, can anyone help me with this? I get data like this:

abc=1|productName= SHAMPTS JODAC RL MTV 36X(4X60G);ABC MANIS RL 12X720G;SO KLIN ROSE FRESH LIQ 24X200ML|field23=tip

I want to extract productName, but I can't, because the productName value is not wrapped in " ", so I'm confused about how to extract it. I've tried the SPL command | makemv delim=";" productName, but the only result is SHAMPTS JODAC RL MTV 36X(4X60G); the rest doesn't appear. I also tried a regex with the command | makemv tokenizer="(([[:alnum:]]+ )+([[:word:]]+))" productName, but the result is still the same. Is there any suggestion so that the values after the ; can be extracted as well?
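One possible approach, assuming the raw event really is pipe-delimited key=value text as shown: the automatic extraction stops at the first space because the value is unquoted, so pull the whole value up to the next pipe out with rex first, and only then split it on the semicolons.

    | rex field=_raw "productName=\s*(?<productName>[^|]+)"
    | makemv delim=";" productName

After this, productName should be a multivalue field with one value per product, e.g. SHAMPTS JODAC RL MTV 36X(4X60G), ABC MANIS RL 12X720G, and SO KLIN ROSE FRESH LIQ 24X200ML.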
Solution: upgrading (and therefore reinstalling ES) again to ES 7.3.2 solved the issue.
Hello, in a clustered or standalone environment, after upgrading Splunk core first and then Splunk ES, Incident Review no longer works and does not show any notables. The `notable` macro is in error and we can see SA-utils Python errors in the log files.
Think layers. HTTP vs. HTTPS is decided before any HTTP request is even sent, so it is enabled at the level of the whole network port, and all HEC tokens are serviced by either an HTTP or an HTTPS input. Whether the HTTP/HTTPS question matters to you security-wise depends on your approach to the data you're ingesting: is it highly confidential, so that anyone eavesdropping on it on the wire is a great concern to you, or not? While Splunk states that switching from HTTPS to HTTP can give a significant performance boost, I'd be cautious with such general statements. It does depend on the hardware you're using and the volume of data you're processing. If you have a fairly modern server or a properly specced VM and you're not processing some humongous amount of data, you should be fairly OK with HTTPS enabled.
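To make the "whole port" point concrete, here is a minimal inputs.conf sketch; the file location, token name, and GUID are placeholders, not taken from any real deployment. enableSSL lives in the global [http] stanza, so it applies to every token serviced by that port.

    # e.g. $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf (location is an example)
    [http]
    disabled = 0
    port = 8088
    # HTTPS on/off is decided here, once, for the whole HEC port - not per token
    enableSSL = 1

    # hypothetical token stanza
    [http://my_app_token]
    token = 11111111-2222-3333-4444-555555555555
    index = main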
1. Enable audit creation on your database system. It's different in each RDBMS, so you have to work with your DB admin on that.
2. Collect the log. As far as I remember, MSSQL stores audit logs in a separate database, so you have to use DB Connect to pull those entries from the database. MySQL, I think, simply writes the audit trail to a flat text log file, so you'll have to set up a file monitor input (see the sketch below).
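For the MySQL case, the file monitor is just a standard monitor stanza; a minimal sketch, where the log path, sourcetype, and index are assumptions you would adapt to your environment:

    # inputs.conf on the forwarder that can read the audit file
    [monitor:///var/log/mysql/audit.log]
    # sourcetype and index below are example names, not fixed values
    sourcetype = mysql:audit
    index = db_audit
    disabled = 0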
Actually, that's a problem which does not have a precise solution. Short of re-running the search and checking what kinds of sourcetypes are returned (and even then it's not 100% certain, because a search can have some random aspects), there's no way of knowing, in the general case, which sourcetypes were searched. So everything you infer from your searches will only be a kind of heuristic: it will give you a probable overview of your search results, but short of recording access to every single event (which Splunk obviously does not do), there's no way of knowing which particular events/metric points were accessed and, subsequently, what their metadata values were.
What was the search you used to populate sourcetypes_1.csv?
Hi Splunk Experts, I have configured HEC and tried to send log data via the OTel Collector, but I can't find a service for the collector. Kindly suggest how to enable the collector service to receive data from the OTel Collector. Much appreciated for your inputs. Regards, Eshwar
Hi marnall, I have changed fieldForLabel and fieldForValue to "APPLICATION", but the dropdown menu still returns only "All". Could you please help? Below is the latest code.

<form hideChrome="true" version="1.1">
  <label>SCODE_VIEW</label>
  <fieldset submitButton="false" autoRun="false">>
    <input type="time" token="field1" searchWhenChanged="true" >
      <label></label>
      <default>
        <earliest>-30d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="SelectedApp" searchWhenChanged="true">
      <label>Application Name</label>
      <Search>
        <query> index="idxmainframe" source="*_SCODE_DATA.CSV" earliest=$field1.earliest$ latest=$field1.latest$ | table APPLICATION | dedup APPLICATION | stats count by APPLICATION </query>
      </Search>
      <fieldForLabel>APPLICATION</fieldForLabel>
      <fieldForValue>APPLICATION</fieldForValue>
      <choice value="*">All</choice>
      <default>All</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| pivot Scode ds dc(USR_ID) AS "Distinct Count of USR_ID" SPLITROW APPLICATION AS APPLICATION SPLITROW MENU_DES AS MENU_DES SPLITROW REPORTING_DEPT AS REPORTING_DEPT SPLITCOL USER_TYPE BOTTOM 0 dc(USR_ID) ROWSUMMARY 0 COLSUMMARY 0 NUMCOLS 100 SHOWOTHER 1 | sort 0 APPLICATION MENU_DES REPORTING_DEPT </query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Thank you for your prompt reply. Actually, I have this search which lists the sourcetypes that have not been searched, but it is not very accurate, so it might contain sourcetypes that are still searchable:

index=_audit action=search info=granted
| eval _raw=search
| eval _raw=mvindex(split(_raw,"|"),0)
| table _raw
| extract
| stats count by sourcetype
| eval hasBeenSearched=1
| append [| metadata index=* type="sourcetypes" | eval hasBeenSearched="0"]
| stats max(hasBeenSearched) as hasBeenSearched by sourcetype
| search hasBeenSearched="0"

So, I created a lookup into which I have put the sourcetypes that have been searched. I was thinking to reference this lookup in the above query so that it removes the sourcetypes that are searchable, but the query is not giving me results. Can you please check where I should adjust the commands that reference that lookup? Here is how I have used the query, but no results are coming:

index=_audit action=search info=granted
| eval _raw=search
| eval _raw=mvindex(split(_raw,"|"),0)
| table _raw
| extract
| stats count by sourcetype
| eval hasBeenSearched=1
| append [| metadata index=* type="sourcetypes" | eval hasBeenSearched="0"]
| stats max(hasBeenSearched) as hasBeenSearched by sourcetype
| search NOT [inputlookup sourcetypes_1.csv | fields sourcetype]
| search hasBeenSearched="0"
Hi @gcusello, thanks for the quick reply.

I checked the filename both manually and a second time by using the following command:

| inputlookup ldap_users.csv

This returns the lookup as expected. I can see and edit my lookup with the Lookup Editor app. I also created a lookup definition and set the permissions on both the lookup and the lookup definition to global read. I also use the lookup in my Enterprise Security asset management, and there it works flawlessly.

However, I managed to just utilize the merged identity lookup that Enterprise Security creates. It is not the solution to the original problem, but it solves my use case.

So for me the solution is to simply utilize another lookup:

index=main source_type=some_event_related_to_users
| lookup identity_lookup_expanded identity as src_user
Do you get the same results when running the command from the command line on all nodes in the cluster? How many nodes do you have?
"non searchable" is not the same as "have not been searched" The problem with this sort of search is that Splunk is not good at finding things which aren't there! You could search through the intern... See more...
"non searchable" is not the same as "have not been searched" The problem with this sort of search is that Splunk is not good at finding things which aren't there! You could search through the internal logs to see what searches have been executed and extract from that which sourcetypes have been specified. This would give you a list of sourcetypes which have been searched specifically, but if sourcetype is not used in the search, all sourcetypes for the index specified could be searched, so do you want to include those as having been searched or say that they haven't been searched? Anyway, having got a list of sourcetypes which have been searched, you should compare this to a list of all sourcetypes to determine which ones "have not been searched" (given the caveats just mentioned).
Hello All, I am looking for a query that can provide me with a list of sourcetypes that have not been searched. Kindly suggest.