All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I really hope you can help me! I have JSON from an API request (Dynatrace). I would like to get the agent version value for each host. How can I do this? My command:

***index="dynatrace_hp" "agentVersion.major"="*" "agentVersion.minor"="*" esxiHostName="*" | stats values(esxiHostName) values(agentVersion.minor)***

Thanks for your help!
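A sketch of one possible fix (untested; it assumes the JSON fields are extracted with these exact names): quote the dotted field names inside stats and group by the host field instead of listing it as an aggregate:

```
index="dynatrace_hp" "agentVersion.major"="*" "agentVersion.minor"="*" esxiHostName="*"
| stats values("agentVersion.major") as major values("agentVersion.minor") as minor by esxiHostName
```

Field names containing dots usually need to be quoted in stats, otherwise they may not resolve.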
Hi, we are ingesting some logs into Splunk in JSON format; the logs are ingested via a TA. The Value field below contains bank details which have to be masked:

PolicyDetails{}.Rules{}.ConditionsMatched.SensitiveInformation{}.SensitiveInformationDetections.DetectedValues{}.Value
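One common approach is a SEDCMD in props.conf on the indexer or heavy forwarder. This is only a sketch: the sourcetype name and the `"Value":"..."` raw pattern below are assumptions, and the regex must be adjusted to match the actual raw events:

```
# props.conf (sketch -- sourcetype name and regex are assumptions)
[your:json:sourcetype]
SEDCMD-mask_bank = s/("Value"\s*:\s*")[^"]+(")/\1XXXXMASKEDXXXX\2/g
```

Note that SEDCMD rewrites _raw at index time; it cannot target the extracted JSON path directly, so the regex has to identify the value in the raw text.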
Hey all, firstly, the title doesn't actually encapsulate what I'm trying to do, so I'll try to break it down simply: I have AWS FlowLogs and AWS Route 53 DNS resolver logs (in the same index, different sourcetypes). I want to search the FlowLogs but have the search do a DNS lookup against the resolver logs and then output the result as a table. Right now I have a query like:

(index=aws sourcetype=flowlogs)
| lookup dnslookup clientip as dest_ip OUTPUT clienthost as dest_DNS
| lookup dnslookup clientip as src_ip OUTPUT clienthost as src_DNS
| table _time dest_ip dest_DNS dest_port src_ip src_DNS src_port vpcflow_action

However, I would like to have the dest_ip and src_ip looked up against the Route 53 resolver logs, and then put THAT result in the table as dest_DNS and src_DNS. Is this even possible?
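One commonly used pattern (sketch only; the resolver-log field names `query_name` and `answer_ip`, the sourcetype, and the lookup file name below are placeholders, not the actual Route 53 field names) is to periodically flatten the resolver logs into a CSV with outputlookup, for example from a scheduled search:

```
index=aws sourcetype=route53
| stats latest(query_name) as clienthost by answer_ip
| outputlookup route53_dns.csv
```

and then reference that CSV from the FlowLogs search instead of dnslookup:

```
index=aws sourcetype=flowlogs
| lookup route53_dns.csv answer_ip as dest_ip OUTPUT clienthost as dest_DNS
| lookup route53_dns.csv answer_ip as src_ip OUTPUT clienthost as src_DNS
| table _time dest_ip dest_DNS dest_port src_ip src_DNS src_port vpcflow_action
```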
I have an SPL query; when it first runs, results appear, but once the query finishes the error below is shown:

| tstats `summariesonly` count(All_Traffic.dest_ip) as destination_ip_count, count(All_Traffic.src_ip) as source_ip_count, count(All_Traffic.dest_port) as destination_port_count, count(All_Traffic.src_port) as source_port_count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src_ip, All_Traffic.src_port, All_Traffic.dest_ip, All_Traffic.protocol, All_Traffic.src_zone, All_Traffic.protocol_version, All_Traffic.action, _time
| lookup 3rd_party_network_connections_vendor_ip.csv index_ip as All_Traffic.src_ip OUTPUT value_ip
| where isnotnull(value_ip) AND All_Traffic.src_port!="53" AND (All_Traffic.action="blocked" OR All_Traffic.action="denied" OR All_Traffic.action="failed") AND source_ip_count > 40 AND destination_ip_count > 40

The error:

StatsFileWriterLz4 file open failed file=C:\Splunk\var\run\splunk\srtemp\910252184_17768_at_1638875294.1\statstmp_merged_5.sb.lz4

Could you validate whether my SPL query is correct? Thanks.
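Independently of the temp-file error (a failed open under srtemp often points at disk space or permissions on the search head rather than the SPL itself, though that is only a guess here), the dotted All_Traffic.* field names in the lookup and where clauses are a frequent source of silent mismatches. A sketch of the same pipeline with the data-model prefix stripped first:

```
| tstats `summariesonly` count(All_Traffic.dest_ip) as destination_ip_count count(All_Traffic.src_ip) as source_ip_count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src_ip, All_Traffic.src_port, All_Traffic.action, _time
| rename All_Traffic.* as *
| lookup 3rd_party_network_connections_vendor_ip.csv index_ip as src_ip OUTPUT value_ip
| where isnotnull(value_ip) AND src_port!="53" AND (action="blocked" OR action="denied" OR action="failed") AND source_ip_count > 40 AND destination_ip_count > 40
```

The wildcard rename lets every later command refer to plain field names instead of quoted dotted ones.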
Hello, I have a table with user gcid and user score, and I want to show it as a bar chart, so the x-axis will be the gcid numbers and the y-axis will be the user score. This is what I'm getting... what am I missing?
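For a bar chart, the search usually needs to end with exactly one "by" field and one numeric series. A sketch (the field names `gcid` and `user_score` are taken from the question; adjust to the real extractions):

```
... | stats sum(user_score) as score by gcid
| sort - score
```

Then select Bar Chart in the Visualization tab; gcid becomes the x-axis and score the y-axis.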
My current query:

source="VLS_OUTSTANDING_GEO.csv" host="dev-bnk-loaniq-" sourcetype="csv"
| geostats latfield=AREA_LATITUDE longfield=AREA_LONGITUDE sum(OST_AMT_FC_CURRENT) count by OST_CDE_RQST_CCY

gives the field shown in the screenshot below. I want to show a proper name instead of sum(OST_AMT_FC_CURRENT) in the tooltip, and I also want to show the summation of the count in the tooltip, like this. Also, is it possible not to show the latitude and longitude in the tooltip?
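geostats accepts `as` renames on its aggregations, so a sketch along these lines (untested; the display names are arbitrary) should change the tooltip labels:

```
source="VLS_OUTSTANDING_GEO.csv" host="dev-bnk-loaniq-" sourcetype="csv"
| geostats latfield=AREA_LATITUDE longfield=AREA_LONGITUDE sum(OST_AMT_FC_CURRENT) as Outstanding_Amount count as Event_Count by OST_CDE_RQST_CCY
```

Hiding latitude and longitude in the tooltip is a map-visualization setting rather than something controllable from the SPL itself, as far as I know.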
Hi,

Until now we only collected logs from production servers with Splunk, but soon we will onboard the system logs from non-prod (Linux, Windows) servers. What is the best way to differentiate between the logs from the different environments?

A different index? (All these logs have the same retention time.)
A different sourcetype? (All the logs are system logs: Windows, Linux.)
An eventtype?
A dedicated "environment" field?
Tagging?

Thanks, Laci
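If the dedicated-field option is chosen, one way (sketch; the field name `environment` is an arbitrary choice) to stamp it at input time is `_meta` in inputs.conf on the forwarders of each environment:

```
# inputs.conf on a non-prod forwarder (sketch)
[default]
_meta = environment::nonprod
```

For the indexed field to be searchable by name, the search head also needs a matching entry in fields.conf:

```
# fields.conf (sketch)
[environment]
INDEXED = true
```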
Hi all, how can we implement keyboard events (like key down/up and tab index) and a mouse-hover action on the tooltip for a textbox input in a Splunk dashboard? Can someone help me with this requirement? It is for making the Splunk page more user friendly from an accessibility point of view.
Greetings fellow Splunkers, we have been receiving false reports claiming certain index, sourcetype, and IP combinations haven't been communicating for a long time; however, when checking, we do actually seem to be receiving a healthy amount of logs from the combination of fields mentioned above. I have seen this in two other organizations as well. What are some recommended fixes for this issue? Has anyone else come across the same problem? Thanks.
Hello all,

We currently use the following search to list all the Windows hosts in our environment:

| tstats dc(host) where index=windows by host

Now I have a requirement to filter for all Windows 10 systems, i.e., where the OS_Version field = Windows 10. Since the OS_Version field is not applicable to tstats, the only option I see is to use the stats command as follows:

index=windows os_version="windows 10" | stats dc(host) by host

This search takes a lot of time and runs very slowly when I query the last 7 days. I understand tstats is much faster than stats, so this slowness with stats is to be expected. Any thoughts or suggestions on how to optimize this and make the search faster for getting a list of distinct hosts and their count based on os_version? What would you do in such a use case?
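If the host-to-OS mapping can be maintained in a lookup (the lookup name `host_os_info` below is hypothetical), the fast tstats scan can be kept and the OS filter applied afterwards on the small result set (sketch):

```
| tstats dc(host) as host_count where index=windows by host
| lookup host_os_info host OUTPUT os_version
| where os_version="Windows 10"
```

The lookup could itself be refreshed by a scheduled search that runs the slow stats query once a day and writes the result with outputlookup, so interactive searches never pay that cost.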
Is there any solution to protect the UF from being stopped or uninstalled by users on endpoints? For example, most antivirus agents are password protected, and on uninstallation users must provide the password; I'm looking for that kind of solution. Thank you.
TYPE    Month   KPI_1  KPI_2
GLOBAL  Oct'21  76     24
LOCAL   Oct'21  46     67

I'm searching the table like:

| search TYPE="GLOBAL" | search Month="Oct'21"

Then I'm transposing the table after deleting the Month field:

| fields - Month | transpose header_field=TYPE column_name=KPI

My problem is that sometimes, when I search for something that is not there, like Month="Sep'21", only the first column of the transposed table comes back:

KPI
KPI_1
KPI_2

How do I show "No results found" instead of this one-column table?
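One workaround (sketch; it assumes the search is always for TYPE="GLOBAL", so the transposed column is named GLOBAL) is to drop the transposed rows whenever the data column never materialized, which makes Splunk fall back to "No results found":

```
| search TYPE="GLOBAL" Month="Oct'21"
| fields - Month
| transpose header_field=TYPE column_name=KPI
| where isnotnull(GLOBAL)
```

When no events match, transpose emits only the KPI column; isnotnull on the missing GLOBAL field then discards every row.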
Hi, I am using earliest and latest in a subsearch to get the last 24 hours of data and compare it with the last 7 days of data to see what changed. If the time range picker is set to Last 7 days, which time range will my outer search use? Kindly assist.
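For illustration, in a sketch like this (index and field names are placeholders) the inner earliest/latest apply only to the subsearch, while the outer search keeps the picker's Last 7 days:

```
index=myindex
    [ search index=myindex earliest=-24h latest=now
      | fields some_id ]
```

In general, an explicit earliest/latest inside a search string overrides the time range picker for that search (or subsearch) only.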
Hello all,

We are currently testing Splunk with the intention of having it collect our Security logs and other logs from domain controllers. Early on, we ran into an issue where user IDs and group GUIDs were being translated after being ingested into Splunk. A quick Google search revealed a simple switch in a configuration stanza so that the account GUIDs are no longer translated. While it's nice that the GUIDs can be resolved, we want a one-to-one match between what is collected from the event log and what is put into Splunk.

There is a security event ID, 4625, that we collect. In Splunk, there is a field called "Group Domain". Some 4625 events appear as expected (correct group, correct domain, etc.), but others show the Group Domain value as the name of the client computer that generated the security event on the domain controller. Incidentally, this same value appears in the "Source Workstation" field.

We are trying to figure out why Splunk is populating the Group Domain field with the name of the workstation generating the security event, and whether there is a way to tell Splunk not to populate this field, as it doesn't necessarily apply. If you look at the XML of the event, no such field exists.

Any help, guidance, etc. would be greatly appreciated.

Regards, Blake
Hi, I'm setting up the Splunk Universal Forwarder to watch logs generated by an application I have in AWS Elastic Beanstalk. This is done by running a shell script that installs the Universal Forwarder and sets up the monitors. Simple enough. The problem is that my application logs to a rolling file: after a certain amount of data has been written (10 MB in this example), it creates a new file in the same location named "example 1.log", then "example 2.log", etc. Currently I've tried using the command below to set up all the monitors, with no success:

/opt/splunkforwarder/bin/splunk add monitor "/var/logs/example*"

How can I capture all the files it will create?
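The same monitor can be declared directly as an inputs.conf stanza deployed by the install script, which is easier to verify than the CLI. A sketch (the path is taken from the question; index and sourcetype are assumptions to adjust):

```
# inputs.conf (sketch)
[monitor:///var/logs/example*]
index = main
sourcetype = app_logs
```

The trailing wildcard also covers the space and number the roller appends ("example 1.log", "example 2.log"). After changing inputs, the forwarder needs a restart, and `splunk list monitor` can confirm which files are actually being watched.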
Hi, hope you are well. I want to use splunk-agent-java and have read the description on this page: https://github.com/splunk/splunk-agent-java

1. This link does not work: http://splunk-base.splunk.com/apps/25505/splunk-for-jmx
2. I downloaded splunkagent.tar.gz and extracted it to /opt/splunkagent.jar on one of my servers, which already has a Splunk forwarder installed.
3. Here is my splunkagent.properties:

agent.app.name=sokantest
agent.app.instance=MyJVM
agent.userEventTags=key1=value1,key2=value2
splunk.transport.impl=com.splunk.javaagent.transport.SplunkTCPTransport
splunk.transport.tcp.host=192.168.1.1
splunk.transport.tcp.port=9997
splunk.transport.tcp.maxQueueSize=5MB
splunk.transport.tcp.dropEventsOnQueueFull=false
trace.blacklist=com/sun,sun/,java/,javax/,com/splunk/javaagent/
trace.methodEntered=true
trace.methodExited=true
trace.classLoaded=true
trace.errors=true
trace.hprof=false
trace.hprof.tempfile=mydump.hprof
trace.hprof.frequency=600
trace.jmx=false
trace.jmx.configfiles=jmx
trace.jmx.default.frequency=60

4. Should I do something on the server side? I can't find any index or sourcetype!
5. I also read this: https://www.slideshare.net/damiendallimore/splunk-java-agent

Any ideas? @Damien_Dallimor Thanks
Splunk query:

index="abc" source=def [| inputlookup ABC.csv | table text_strings count | rename text_strings as search]

Problem: I need to count the text_string values, but when I run the above search, which searches for the text_strings, I don't find a field called search with which I can count. So I need help, @somesoni2, if you can help please.
Hi, I have transforms to send logs from prod hosts to one index and from non-prod hosts to another.

transforms.conf:

[prod]
DEST_KEY = MetaData:Index
REGEX = (.*-prd.*)
FORMAT = index_a

[nonprod]
DEST_KEY = MetaData:Index
REGEX = (.*-nprd.*)
FORMAT = index_b

The above transforms work fine for all logs from those hosts. But now the problem is that I only want them to apply to /var/log/messages and /var/log/secure. Any suggestions on whether I can combine multiple regex conditions based on host (i.e., prd) and source path? I appreciate your help on this.
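One standard way (sketch; stanza names are arbitrary) is to scope the transforms by source in props.conf, and match the host inside the transform via SOURCE_KEY, so both conditions must hold:

```
# props.conf
[source::/var/log/messages]
TRANSFORMS-route_by_env = prod, nonprod

[source::/var/log/secure]
TRANSFORMS-route_by_env = prod, nonprod

# transforms.conf
[prod]
SOURCE_KEY = MetaData:Host
REGEX = .*-prd.*
DEST_KEY = MetaData:Index
FORMAT = index_a

[nonprod]
SOURCE_KEY = MetaData:Host
REGEX = .*-nprd.*
DEST_KEY = MetaData:Index
FORMAT = index_b
```

Because the transforms are only referenced from the two source stanzas, events from other paths are never touched.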
I am using the following query and trying to display the results using stats, with a count by field values:

search query | table A B C D E | stats count values(A) as errors values(B) values(C) by E

I also tried:

| stats count by E A B C

but this messes everything up, as it requires every field to have a value.

Current output:

E       count  A  B   C
Value1  10     X  YY  ZZZ
               Y  ZZ  BBB

Desired output:

E       count  A  B   C
Value1  8      X  YY  ZZZ
        2      Y  ZZ  BBB

@somesoni2
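Since `stats count by` silently drops events where any of the by-fields is null, one sketch (the placeholder value is arbitrary) is to fill empty fields first so every combination keeps its own row and count:

```
search query
| fillnull value="-" A B C
| stats count by E A B C
| rename A as errors
```

The rename at the end is optional, mirroring the `values(A) as errors` naming from the original query.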
Hi, how can I ingest MCAS Salesforce logs into Splunk?