Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Did an upgrade from Splunk ES 6.1.1 to 6.6.2, and now any dashboard that uses the Downsampled Line Chart viz fails to load with "Failed to load source for Downsampled Line Chart visualization." In fact, the same happens for any of the visualizations below the "More" header when selecting a chart type (including histogram, box plot, and 3D scatter plot). Has anyone run into such an issue after upgrading?
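The "More" chart types are delivered by separate visualization apps, so a first check after an upgrade is whether those apps are still present and enabled on the search head. A small diagnostic sketch, assuming you can run REST searches:

    | rest /services/apps/local splunk_server=local
    | table title label version disabled

If the visualization apps show up as disabled or missing here, re-enabling or reinstalling them is a likely fix.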
Hi, I have the below query and I need a scatter point visualization for it, with time on the x-axis and the count on the y-axis. How can I achieve this?

    | inputlookup hsbc_es_pr_mapping.csv
    | eval "Configuration Item" = lower('Configuration Item')
    | lookup hsbc_dc_app_eim_lookup_eim_basic_extract.csv hostname as "Configuration Item" OUTPUT IT_SERVICE
    | search Status = Open
    | fields "Problem Number" IT_SERVICE
    | stats count as "Count of PR's" by IT_SERVICE
    | sort 10 - "Count of PR's"

Thanks
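A scatter chart needs a time column in the results, but this search only reads a lookup and aggregates by IT_SERVICE, so there is nothing for the x-axis. A sketch, assuming the lookup carries a date column (the "Created" field below is hypothetical) that can be parsed into _time:

    | inputlookup hsbc_es_pr_mapping.csv
    | eval _time = strptime(Created, "%Y-%m-%d")
    | bin _time span=1d
    | stats count as "Count of PR's" by _time

With _time as the first column and the count as the second, the Scatter Chart visualization can plot time on the x-axis and the count on the y-axis.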
Is there any easy way to enable/disable indexing of a debug log file so that it can be indexed only when needed? We have some debug log files that are used primarily during rollouts of new features and testing cycles. We would love to have the data in Splunk, but most of the time it is not needed.
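One low-effort approach is to keep a dedicated monitor stanza for the debug files and flip its disabled flag (via the deployment server or config management) only during rollouts. A sketch, with a hypothetical path and index:

    # inputs.conf on the forwarder
    [monitor:///var/log/myapp/debug.log]
    index = app_debug
    sourcetype = myapp:debug
    # set to 0 during rollouts/testing, back to 1 afterwards
    disabled = 1

Nothing is read or indexed while disabled = 1, and the forwarder picks up the change on restart or deployment-server reload.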
I have a query like:

    index=xyz
    | eval assignment = upper(assignment)
    | eval SO = upper(SO)
    | eval Ser = upper(Ser)
    | join type=inner assignment, SO, Ser
        [| inputlookup xyz.csv
         | table assignment, SO, Ser
         | eval assignment = upper(assignment)
         | eval SO = upper(SO)
         | eval Ser = upper(Ser)]

Is this a valid query? I want only the events whose common fields (assignment, SO, Ser) also appear in the lookup.
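The inner join should work, but join is subject to subsearch row and memory limits, and a subsearch used as a filter is usually cheaper. A sketch of an equivalent, join-free form under the same uppercasing assumption:

    index=xyz
    | eval assignment=upper(assignment), SO=upper(SO), Ser=upper(Ser)
    | search
        [| inputlookup xyz.csv
         | eval assignment=upper(assignment), SO=upper(SO), Ser=upper(Ser)
         | fields assignment SO Ser]

The subsearch expands to (assignment=A AND SO=B AND Ser=C) OR ... per lookup row, so only events matching some lookup row survive.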
The package's default/distsearch.conf contains a stanza, apparently to exclude the package itself from search bundles:

    [replicationDenylist]
    noanaconda = apps[/\\]Splunk_SA_Scientific_Python*[/\\]...

Except that there is no "replicationDenylist" in this conf file, according to the documentation. It should have been "replicationBlacklist", according to the documentation and our own experiment. This package is big, so when it is not excluded from search bundles, it causes the search bundle size to exceed the size limit. I reported this to Splunk in a support case, but the support engineer insists that "this is not a bug, this is just information wrongly added in the documentation."
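Until the package ships the key your Splunk version actually honors, a local override using the documented name should keep the bundle small. A sketch that mirrors the package's own pattern:

    # local/distsearch.conf on the search head
    [replicationBlacklist]
    noanaconda = apps[/\\]Splunk_SA_Scientific_Python*[/\\]...

Which stanza name (replicationBlacklist, replicationDenylist, or both) a given release reads is version-dependent; the authoritative list is distsearch.conf.spec under $SPLUNK_HOME/etc/system/README.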
I am trying to extract the action=* value from this field; in this event it is ADD. I have tried extracting it the way you would typically extract fields, but that does not capture all the different possible events (action=delete, action=replace, etc.).

    UPDATE#011class=DATASET#011prof=IMSVS.*#011vol=P1CP02#011dsn=IMSVS.BETALIBA#011member=PYNMU49#011box=HTC-95-000000033771-0094#011action=ADD#011sum=PJXCPAI6

So I resorted to writing my own regex, (?<=action=).*(?=#), but I cannot seem to get the rex command to work, or to manually add my regex to the field extraction:

    rex field=intent (?<=action=).*(?=#)

I get this error message when using the rex command above: Error in 'rex' command: The regex '(?<=action=).*(?=#)' does not extract anything. It should specify at least one named group. Format: (?<name>...).
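The error is literal: rex requires a named capture group, and lookarounds alone do not create one. A minimal sketch, keeping the intent field name from the question and assuming the #011 delimiters appear literally in the text (if they are real tab characters, use [^\t]+ instead):

    | rex field=intent "action=(?<action>[^#]+)"

[^#]+ stops at the next #011 delimiter, so action=ADD, action=DELETE, action=REPLACE, etc. all land in a field named action.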
I have a quite unusual case. One of my sources emits logs with a very stupid timestamp format. It consists of a date and time glued together, which on its own is quite OK, but followed by timezone info in the form of a time difference vs UTC expressed... in minutes. So it's not your typical "+0200". No. It's "+120". There's no such timezone format in the strptime format specification, so I have to handle it some other way. Since _time is crucial to proper event processing, I of course have to adjust it at ingest time. I thought about parsing the offset from the timestamp as an independent field and then correcting the _time field before indexing the event. Does that make sense? I don't see any other way of producing a correct timestamp from such data.
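That approach can be done entirely with INGEST_EVAL in transforms.conf. A sketch, assuming the raw event starts with something like 20211207123456+120 (14 digits of local time glued to an offset in minutes) and that the bare timestamp should be read as UTC before the offset is applied:

    # props.conf
    [my:sourcetype]
    TRANSFORMS-fix_tz = fix_minute_offset

    # transforms.conf
    [fix_minute_offset]
    INGEST_EVAL = _time = strptime(replace(_raw, "^(\d{14}).*", "\1") . " +0000", "%Y%m%d%H%M%S %z") - 60 * tonumber(replace(_raw, "^\d{14}([+-]\d+).*", "\1"))

Appending " +0000" pins strptime to UTC regardless of the indexer's timezone; the second replace extracts the minute offset, and subtracting 60 times it converts the glued local time to UTC.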
I have an alert which runs every minute via cron, and its time range is set to "last 4 minutes", but for some reason the alert searches a window roughly 30 minutes in the past (and within that offset it does cover 4 minutes):

    created at 4:46:04 PM, searched 4:16:00 PM to 4:20:00 PM

The time is correct on the SH servers.
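If the clocks are right, the scheduler's own logs usually show whether the job is being dispatched late (deferred runs eventually execute with old scheduled times, which looks exactly like this). A diagnostic sketch, with the alert name as a placeholder:

    index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
    | table _time status scheduled_time run_time
    | convert ctime(scheduled_time)

Comparing scheduled_time against _time per row shows whether each run executes on time or gets deferred by a busy scheduler.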
Hello, I really hope you can help me! I have JSON from an API request (Dynatrace). I would like to get the agent version value for each host. How can I do this? My command:

    index="dynatrace_hp" "agentVersion.major"="*" "agentVersion.minor"="*" esxiHostName="*"
    | stats values(esxiHostName) values(agentVersion.minor)

Thanks for your help!
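To get the version per host, group by the host field instead of listing hosts as a stats value, and quote the dotted field names inside the functions. A sketch, assuming one agent version per host:

    index="dynatrace_hp" esxiHostName="*" "agentVersion.major"="*"
    | stats values("agentVersion.major") as major values("agentVersion.minor") as minor by esxiHostName
    | eval agent_version = major . "." . minor

The by esxiHostName clause yields one row per host; the eval just glues major and minor together for readability.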
Hi, we are ingesting some logs into Splunk in JSON format; the logs are ingested via a TA. The Value field at the path below contains bank details which have to be masked:

    PolicyDetails{}.Rules{}.ConditionsMatched.SensitiveInformation{}.SensitiveInformationDetections.DetectedValues{}.Value
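Index-time masking is usually done with SEDCMD in props.conf on the parsing tier (heavy forwarder or indexers), operating on the raw JSON text rather than on the extracted path. A sketch, assuming the raw events contain "Value":"<digits>" pairs and that the bank details are plain digit runs:

    # props.conf
    [your:sourcetype]
    SEDCMD-mask_bank = s/("Value"\s*:\s*")[0-9]+(")/\1XXXXXX\2/g

This rewrites the digits before indexing; anything already indexed stays unmasked, and if the values are not purely numeric the character class needs widening.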
Hey all, firstly - the title doesn't actually encapsulate what I'm trying to do, so I'll try to break it down simply: I have AWS FlowLogs and AWS Route53 DNS resolver logs (in the same index, different sourcetypes). I want to search the FlowLogs but have the search do a DNS lookup against the resolver logs and then output the result as a table. Right now I have a query like:

    (index=aws sourcetype=flowlogs)
    | lookup dnslookup clientip as dest_ip OUTPUT clienthost as dest_DNS
    | lookup dnslookup clientip as src_ip OUTPUT clienthost as src_DNS
    | table _time dest_ip dest_DNS dest_port src_ip src_DNS src_port vpcflow_action

However, I would like to have dest_ip and src_ip looked up against the Route53 resolver logs, and then put THAT result in the table as dest_DNS and src_DNS. Is this even possible?
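One common pattern is to periodically flatten the resolver logs into a CSV with outputlookup, then use that CSV like any other lookup in the FlowLogs search. A sketch; the resolver field names answer_ip and query_name are assumptions about that sourcetype:

    # scheduled search, e.g. every 15 minutes
    index=aws sourcetype=route53
    | stats latest(query_name) as hostname by answer_ip
    | outputlookup route53_dns.csv

    # FlowLogs search
    index=aws sourcetype=flowlogs
    | lookup route53_dns.csv answer_ip as dest_ip OUTPUT hostname as dest_DNS
    | lookup route53_dns.csv answer_ip as src_ip OUTPUT hostname as src_DNS
    | table _time dest_ip dest_DNS dest_port src_ip src_DNS src_port vpcflow_action

This avoids joining against the resolver events on every search, which would be slow and subject to subsearch limits.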
I have an SPL search; when it first runs the results appear, but once the query finishes the error below is shown:

    | tstats `summariesonly` count(All_Traffic.dest_ip) as destination_ip_count, count(All_Traffic.src_ip) as source_ip_count, count(All_Traffic.dest_port) as destination_port_count, count(All_Traffic.src_port) as source_port_count
      from datamodel=Network_Traffic.All_Traffic
      by All_Traffic.src_ip, All_Traffic.src_port, All_Traffic.dest_ip, All_Traffic.protocol, All_Traffic.src_zone, All_Traffic.protocol_version, All_Traffic.action, _time
    | lookup 3rd_party_network_connections_vendor_ip.csv index_ip as All_Traffic.src_ip OUTPUT value_ip
    | where isnotnull(value_ip) AND All_Traffic.src_port!="53" AND (All_Traffic.action="blocked" OR All_Traffic.action="denied" OR All_Traffic.action="failed") AND source_ip_count > 40 AND destination_ip_count > 40

The error:

    StatsFileWriterLz4 file open failed file=C:\Splunk\var\run\splunk\srtemp\910252184_17768_at_1638875294.1\statstmp_merged_5.sb.lz4

Could you validate whether my SPL query is correct or not? Thanks
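The file-open failure points at the temp area (disk space, permissions, or antivirus locking C:\Splunk\var\run) rather than at the SPL itself. That said, there is one likely logic issue: fields coming out of tstats keep the All_Traffic. prefix, and where cannot reference a dotted field name unquoted (the dot parses as concatenation). A sketch of the tail with the fields renamed first:

    | rename All_Traffic.* as *
    | lookup 3rd_party_network_connections_vendor_ip.csv index_ip as src_ip OUTPUT value_ip
    | where isnotnull(value_ip) AND src_port!="53" AND (action="blocked" OR action="denied" OR action="failed") AND source_ip_count > 40 AND destination_ip_count > 40

Renaming right after tstats keeps the rest of the pipeline free of quoting pitfalls.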
Hello, I have a table with user gcid and user score, and I want to show it as a bar chart, so the x-axis will be the gcid numbers and the y-axis will be the user score. This is what I'm getting... what am I missing?
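For a column/bar chart, Splunk wants one row per x-axis value with the metric as a single numeric column. A sketch, using the gcid and score names from the question:

    ... | stats max(score) as score by gcid
    | sort - score

With gcid as the first column and score as the only other column, the Column Chart visualization puts gcid on the x-axis and score on the y-axis; if score arrives as a string, wrap it in tonumber() first.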
My current query:

    source="VLS_OUTSTANDING_GEO.csv" host="dev-bnk-loaniq-" sourcetype="csv"
    | geostats latfield=AREA_LATITUDE longfield=AREA_LONGITUDE sum(OST_AMT_FC_CURRENT) count by OST_CDE_RQST_CCY

gives the field names shown in the screenshot. I want to show a proper name instead of sum(OST_AMT_FC_CURRENT) in the tooltip, and I also want to show the summation of the count in the tooltip. Also, is it possible not to show the latitude & longitude in the tooltip?
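geostats accepts the usual "as" renaming on its aggregate functions, and the renamed labels are what the tooltip displays. A sketch:

    source="VLS_OUTSTANDING_GEO.csv" host="dev-bnk-loaniq-" sourcetype="csv"
    | geostats latfield=AREA_LATITUDE longfield=AREA_LONGITUDE sum(OST_AMT_FC_CURRENT) as "Outstanding Amount" count as "Record Count" by OST_CDE_RQST_CCY

Hiding the latitude/longitude line is a property of the map visualization rather than of the search, so it may not be controllable from SPL alone.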
Hi, until now we only collected logs from production servers with Splunk, but soon we will onboard the system logs from non-prod (Linux, Windows) servers. What is the best way to differentiate between the logs from different environments?

- a different index? (all these logs have the same retention time)
- a different sourcetype? (all the logs are system logs, Windows/Linux)
- an eventtype?
- a dedicated "environment" field?
- tagging?

Thanks, Laci
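If retention and access control are identical for both environments, a dedicated indexed field set on the forwarders is a lightweight option: no extra indexes, and it works uniformly across Windows and Linux sourcetypes. A sketch:

    # inputs.conf on non-prod forwarders
    [default]
    _meta = environment::nonprod

    # fields.conf on the search head
    [environment]
    INDEXED = true

Searches can then filter with environment::nonprod (or environment=nonprod once fields.conf is deployed) regardless of index or sourcetype.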
Hi all, how can we implement keyboard events (like key down/up and tab index) and a mouse-hover action on a tooltip for a textbox input in a Splunk dashboard? Can someone help me with this requirement for making the Splunk page more user-friendly from an accessibility point of view?
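Simple XML dashboards can load a custom JS file via the script attribute on the root element and decorate the rendered inputs with standard DOM attributes and handlers. A sketch, assuming an input with id="my_text" in the XML; the selector is an assumption about the rendered markup and may need adjusting between Splunk versions:

    // appserver/static/a11y.js, hypothetical file referenced as <form script="a11y.js">
    require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
        var $box = $('#my_text input');                          // the rendered text box
        $box.attr({tabindex: 1, title: 'Enter a host name'});    // tab order + native hover tooltip
        $box.on('keydown', function(e) {                         // keyboard events
            if (e.key === 'Enter') {
                console.log('Enter pressed in my_text');
            }
        });
    });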
Greetings fellow Splunkers, we have been receiving false reports claiming certain index, sourcetype, and IP combinations haven't been communicating for a long time; however, when checking, we do actually seem to be receiving a healthy amount of logs for the combination of fields mentioned above. I have seen this in 2 other organizations as well. What are some recommended fixes for this issue? Has anyone else come across the same problem? Thanks,
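When a "no data" monitor and reality disagree, comparing the monitor's logic against tstats over the same window usually shows which side is wrong, since tstats reads index-time metadata and is immune to search-time field extractions and to events indexed with old timestamps. A sketch:

    | tstats max(_time) as last_seen count where index=* earliest=-24h by index, sourcetype, host
    | eval last_seen = strftime(last_seen, "%F %T")

If a combination shows healthy counts here but still triggers the report, the report is likely keying on a search-time field (such as an extracted ip) or using a time window that lags the events' _time.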
Hello all, we currently use the following search to list all the Windows hosts in our environment:

    | tstats dc(host) where index=windows by host

Now I have a requirement to filter out all Windows 10 systems, i.e. keep hosts where the OS_Version field = Windows 10. Since the OS_Version field is not available to tstats, the only option I see is to use the stats command as follows:

    index=windows os_version="windows 10" | stats dc(host) by host

This search takes a lot of time and runs very slowly if I query over a "Last 7 days" time range. I understand tstats is much faster than stats, and this slowness with stats is bound to happen. Any thoughts or suggestions on how to optimize this and make the search faster for getting a list of distinct hosts and their count based on os_version? What would you all do in such a use case?
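Since os_version is a search-time field, tstats cannot filter on it directly, but the heavy counting can stay in tstats with the OS filter applied afterwards against a small host list. A sketch, assuming a hypothetical windows_assets.csv lookup mapping host to os_version:

    | tstats count where index=windows by host
    | lookup windows_assets.csv host OUTPUT os_version
    | where os_version="Windows 10"
    | stats dc(host) as win10_hosts

The lookup itself can be maintained by a scheduled search that runs the slow os_version extraction once a day and writes it out with outputlookup.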
Is there any solution to protect the UF from being stopped or uninstalled by users on endpoints? For example, most antivirus agents are password protected, and on uninstallation users must provide the password; I'm looking for that kind of solution. Thank you.
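The UF has no built-in uninstall password, but on Windows you can restrict who may stop the service by tightening its security descriptor, so that stopping (and hence cleanly uninstalling) requires admin rights. A sketch; the SDDL string is illustrative only and should be derived from your own sdshow output and tested on one endpoint first:

    :: show the current security descriptor of the UF service
    sc sdshow SplunkForwarder

    :: example: full control for SYSTEM and Administrators, read-only for authenticated users
    sc sdset SplunkForwarder "D:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;AU)"

On Linux the equivalent is simply not granting users sudo rights over the splunk service or systemd unit.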
I have a table like this:

    TYPE    Month   KPI_1  KPI_2
    GLOBAL  Oct'21  76     24
    LOCAL   Oct'21  46     67

I'm searching the table like:

    | search TYPE="GLOBAL" | search Month="Oct'21"

Then I'm transposing the table after deleting the Month field:

    | fields - Month | transpose header_field=TYPE column_name=KPI

My problem is that sometimes, when I search for something that is not there (like Month="Sep'21"), only the first column of the transposed table comes back:

    KPI
    KPI_1
    KPI_2

How can I show "no results found" instead of this 1-column table?