All Topics

With .NET Core 3.1 now being the LTS version of .NET Core, when will support for it be coming to the .NET Core Agent?
I have two managed Linux servers running the universal forwarder, and both have the same host name. When I check the "Forwarder Management" menu, only one server appears at a time, so I want to differentiate them by hostname.
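For reference, a minimal sketch of giving each forwarder a distinct identity, assuming the two instances were cloned with identical defaults; the name shown is a placeholder:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the second forwarder
[general]
serverName = linux-uf-02

# $SPLUNK_HOME/etc/system/local/inputs.conf on the second forwarder
[default]
host = linux-uf-02
```

If the machines were cloned from the same image, the instances may also share a GUID in etc/instance.cfg, which makes Forwarder Management treat them as one client; removing that file on one instance and restarting regenerates it.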
I'm a bit new to deploying forwarders on endpoints I manage (I'm not new to Splunk). Many guides I see (including the install instructions for this Sysmon TA) state that you should deploy this TA onto your forwarders. To do this, the user needs to manually create an outputs.conf file (with the indexer IP/DNS) and place it in \TA-microsoft-sysmon\default\. So why is there not a default/blank outputs.conf file located in \TA-microsoft-sysmon\default\ from the start? (Or even a blank file with just a #nothing line? I get that the devs don't know the IP/DNS of our indexers.) I'm not complaining about this; I'm asking in case I'm missing something, so that I can better understand. It would seem to me that a majority of users of this TA will be deploying it on their forwarders as well as their indexer, so I'm wondering why there is not an outputs.conf placeholder. Thanks!
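For context, the kind of placeholder being described would look something like the sketch below; the group name and server addresses are examples, not real indexers. (A common convention is to ship forwarding settings in a separate deployment app or in local/ rather than in a TA's default/ directory, which may be why the TA omits it.)

```ini
# outputs.conf -- placeholder only; replace the server list with your indexers
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```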
Hi, I've got this event:

2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="FPPPPPPFPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP"

Each position of the Pass value gives a pass or fail for one position (1..80, but it can also be only 1..45). For example, Pass="FPPF" says:

Position_1 = Fail
Position_2 = Pass
Position_3 = Pass
Position_4 = Fail

Now I want to build a table showing how many fails each position has across all events. How can I do this? One possibility could be to use mvexpand and build more events. For example, from this:

2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="FPPF"

build these events:

2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Fail" Position="1"
2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Pass" Position="2"
2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Pass" Position="3"
2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Fail" Position="4"

...but how is it possible to do this? Or is there another way to build my table? Thanks!
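The per-position expansion described above can be sketched in SPL roughly as follows, assuming the Pass field is already extracted (index=machines is a placeholder):

```spl
index=machines Pass=*
| eval position=mvrange(1, len(Pass)+1)
| mvexpand position
| eval result=if(substr(Pass, position, 1)=="F", "Fail", "Pass")
| where result="Fail"
| stats count as fails by position
| sort position
```

mvrange/mvexpand creates one row per character position, substr reads the character at that 1-based position, and the final stats counts fails per position across all events.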
Hi All, I'm stumped on the following search. The scenario: I'm trying to track the amount of time a support ticket is assigned to a support team in a specific status, over the lifecycle of the ticket. The following streamstats works great, provided the ticket doesn't get assigned to the same team and status twice (i.e. assigned out and back in); currently it also sums the time in between. Again, I only want to sum the time spent in a team and status, not the time in between when the ticket goes elsewhere.

| dedup ticket_id, _time, ticket_arvig_status
| eval temp2=id+","+ticket_status
| search (ticket_team="TIER 2" AND ticket_status="tier 2 needed")
| streamstats range(_time) AS StatusDuration by ticket_id global=f window=2
| stats sum(StatusDuration) AS TotalStatusDuration by ticket_id, ticket_status, ticket_team
| stats avg(TotalStatusDuration) as averageage by ticket_id

Any help would be appreciated!
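One way to sketch attributing each interval to the status that began it, rather than ranging over a filtered window: compute each event's successor time first, then filter. The index and sourcetype below are placeholders for the real ticket data:

```spl
index=tickets sourcetype=helpdesk
| sort 0 ticket_id -_time
| streamstats current=f window=1 first(_time) as next_time by ticket_id
| eval StatusDuration = next_time - _time
| search ticket_team="TIER 2" ticket_status="tier 2 needed"
| stats sum(StatusDuration) as TotalStatusDuration by ticket_id
```

With events sorted newest-first per ticket, the previous streamstats row is the next chronological event, so StatusDuration is the time spent before the ticket next changed; filtering after the eval means only intervals that started in the target team/status are summed.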
Hi, we want to parse the logs on a heavy forwarder before they are forwarded to the indexers. Logs are forwarded from a universal forwarder to the heavy forwarder. I have set a sourcetype in inputs.conf on the UF and created a props.conf stanza with the same sourcetype value (EXTRACT = regex). Logs are coming in with the sourcetype I set in inputs.conf, but the stanza in props.conf with the regex is not being picked up, so the logs are not parsed. I have tested props.conf and it parsed correctly in a test environment, but that was by uploading the file. Am I missing anything here?
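For reference, a sketch of the kind of stanza described; the sourcetype name and regex are placeholders. One thing worth noting: EXTRACT-* is a search-time setting, so it takes effect on the search head at search time, not on a heavy forwarder at forwarding time, which may explain why it appears to be ignored there:

```ini
# props.conf on the search head (search-time field extraction)
[my:sourcetype]
EXTRACT-fields = ^(?<client_ip>\S+)\s+(?<status>\d{3})
```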
Hi everybody, I need to find all the servers on which the Windows EventID=XYZ is not logged. Therefore I run a search for all servers in my index (to get all the servers) and then an inner search for only those servers where EventID=XYZ was logged at least once. When I subtract this result from the "all servers" result, only those servers should remain which didn't log EventID=XYZ. But how is this done?

index=servers [search index=servers EventID=XYZ | stats values(host) as not_wanted_servers | fields not_wanted_servers]
| stats values(host) as target_servers
| where target_servers NOT in not_wanted_servers

The last line doesn't work, but it should show what I want to do.
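A sketch of the usual subsearch-exclusion idiom for this kind of question: by returning the subsearch field as host, the subsearch expands to (host=a OR host=b ...), which NOT then negates in the outer search:

```spl
index=servers NOT [search index=servers EventID=XYZ
    | stats values(host) as host
    | fields host]
| stats values(host) as servers_missing_event
```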
Hello Team, I am new to Splunk and need to understand a few things. Could anyone please answer these questions:

1.) How do I make a list of the sourcetypes and eventtypes that need to be fixed to allow for a proper data model?
2.) How do I identify incorrectly aliased/extracted fields?
3.) How do I determine the sourcetype associated with incorrect/unknown fields?
4.) How do I identify incorrect/unknown fields from a data model, and what are the steps to fix them?

Sorry, these are common questions, but being new I need to create a report on this! Thanks in advance!
I am trying to pull fields out of an .xml file so I can make sense of them and put the info into a dashboard. I am trying to pull the ruleID, ruleResult, and result count out where they are relational to each other, so I end up with (CVE#, Fail or Fixed, count#). I tried making new fields, but Splunk doesn't see that these fields have any relation to each other, and they just come up as individuals.

<summRes:ruleResult ruleID="CVE-2000-1985">
  <summRes:ident>CVE-2000-1985</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fail">
    <summRes:result count="15489"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-1820">
  <summRes:ident>CVE-2000-1820</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fail">
    <summRes:result count="14560"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-4568">
  <summRes:ident>CVE-2000-4568</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fail">
    <summRes:result count="13458"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-1156">
  <summRes:ident>CVE-2000-1156</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fail">
    <summRes:result count="12567"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-5641">
  <summRes:ident>CVE-2000-5641</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fail">
    <summRes:result count="11243"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-1985">
  <summRes:ident>CVE-2000-1985</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fixed">
    <summRes:result count="900"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-1156">
  <summRes:ident>CVE-2000-1156</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fixed">
    <summRes:result count="726"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
<summRes:ruleResult ruleID="CVE-2000-4568">
  <summRes:ident>CVE-2000-4568</summRes:ident>
  <summRes:ruleComplianceItem ruleResult="fixed">
    <summRes:result count="455"/>
  </summRes:ruleComplianceItem>
</summRes:ruleResult>
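One way to sketch keeping the three values tied together is a multi-match rex over _raw followed by mvzip/mvexpand, so each (ruleID, ruleResult, count) triple stays on one row. This assumes the ruleResult blocks arrive inside a single event; the index name is a placeholder:

```spl
index=compliance
| rex max_match=0 "ruleID=\"(?<ruleID>[^\"]+)\"[\s\S]*?ruleResult=\"(?<ruleResult>[^\"]+)\"[\s\S]*?count=\"(?<count>\d+)\""
| eval triple=mvzip(mvzip(ruleID, ruleResult), count)
| mvexpand triple
| eval parts=split(triple, ","), ruleID=mvindex(parts,0), ruleResult=mvindex(parts,1), count=mvindex(parts,2)
| stats sum(count) as count by ruleID, ruleResult
```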
Hi Team, I have the following situation. One of our AutoSys jobs runs for 20 hours with its status recorded in the logs as RUNNING, but only one event with that status is recorded, i.e. when it changed from STARTING to RUNNING. I would like to show the same status for the job across that 20-hour duration in the timechart. I am using the following query but am not able to get the solution. Can you please help?

index=infra_apps sourcetype=ca:atsys:edemon:txt
| rename hostname as host
| fields Job host Autosysjob_time Status
| lookup datalakenodeslist.csv host OUTPUT cluster
| mvexpand cluster
| search Status=STARTING AND cluster=* AND host="" AND Job=
| dedup Job Autosysjob_time host
| timechart span=5m count(Job) by cluster

Your help is much appreciated.
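One way to sketch carrying a sparse status forward across otherwise-empty time buckets is timechart latest() followed by filldown; the index, sourcetype, and field names follow the query above:

```spl
index=infra_apps sourcetype=ca:atsys:edemon:txt
| timechart span=5m latest(Status) as Status by Job limit=0
| filldown
```

filldown repeats the last non-null value in each column, so a single STARTING-to-RUNNING transition keeps the job showing RUNNING in every subsequent bucket until the next status event.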
I am very new to Splunk. I have 100+ web services automated using Java, and I need to integrate them with Splunk so that after integration I can view the log in Splunk on every web service execution. Please help me here.
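A minimal sketch of one common approach, a file monitor input on a universal forwarder; the path, index, and sourcetype below are placeholders for wherever the Java services write their logs:

```ini
# inputs.conf on the universal forwarder
[monitor:///var/log/webservices/*.log]
index = webservices
sourcetype = java:webservice
disabled = false
```

The forwarder tails the matching files and sends each new log line to the indexers, so every web service execution that writes a log entry becomes searchable.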
I have a dynamic set of result data from which I'd like to extract the common beginning of a line across multiple values. For instance, based on this data:

FooBarBla
FooBar
FooBar_Brr

I'd like to end up with: FooBar

Based on this set of data:

foo_bar_brr
foo_bar_grr
foo_bar_gr

I'd like to end up with: foo_bar

The challenge I face is that the data is different all the time and depends on a host input, so I need to do some sort of comparison between the lines and then extract the matching bit at the beginning. Any ideas how I can achieve this, if at all possible?
Hi, I have a query like the one below:

index=linux sourcetype=iostat mount="*"

It lists total_ops for each mount of a host across multiple events. I need the sum of total_ops over all mounts for each host, taken from the latest event per mount. Please help.
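One way to sketch this as a two-stage aggregation, building on the search above: take the latest value per host/mount pair first, then sum those per host:

```spl
index=linux sourcetype=iostat mount="*"
| stats latest(total_ops) as total_ops by host, mount
| stats sum(total_ops) as total_ops by host
```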
Can access restrictions be put on a lookup automatically upon creation? For example: User A creates a lookup; can this lookup be automatically restricted so that User B cannot search its contents? I know this can be done manually by setting the read permissions (select roles) on the lookup, but is there a way to automatically set the permissions to be restrictive upon creation?
Hi, we are receiving the event in JSON format; the _raw event is given below. I am trying to extract fields at search time through props and transforms from a particular field, but it is not working.

_raw event:

[{"command":"gitly-upload-pack tcp://prod-gitly-primary.domain.com:80 {\"repository\":{\"storage_name\":\"default\",\"relative_path\":\"infrastructure/app-config-iam-lint-rules.git\",\"git_object_directory\":\"\",\"git_alternate_object_directories\":[],\"gl_repository\":\"project-139\",\"gl_project_path\":\"infrastructure/app-config-iam-lint-rules\"},\"gl_repository\":\"project-139\",\"gl_project_path\":\"infrastructure/app-config-iam-lint-rules\",\"gl_id\":\"key-id\",\"gl_username\":\"uname\",\"git_config_options\":[],\"git_protocol\":null}","user":"user with id key-7260","pid":6055,"level":"info","msg":"executing git command","time":"2020-02-14T11:23:34+00:00","instance_id":"instanceid","instance_type":"m5.4xlarge","az":"us-east-1b","private_ip":"x.x.x.x","vpc_id":"vpc-id","ami_id":"ami-id","account_id":"12345","vpc":"infra-vpc","log_env":"prod","fluent_added_timestamp":"2020-02-14T11:23:36.397+0000","@timestamp":"2020-02-14T11:23:36.397+0000","SOURCE_REALTIME_TIMESTAMP":"1581679416397075","MESSAGE":"executing git command"}

Below is the value assigned to the command field, which I am trying to split into multiple fields:

gitly-upload-pack tcp://prod-gitly-primary.domain.com:80 {"repository":{"storage_name":"default","relative_path":"infrastructure/app-config-iam-lint-rules.git","git_object_directory":"","git_alternate_object_directories":[],"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules"},"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules","gl_id":"key-id","gl_username":"uname","git_config_options":[],"git_protocol":null}

It is extracted as expected through the rex search command.
searchquery | rex field=command "^(?<git_command>[^\s]+)\s(?<git_url>[^\s]+)\s(?<git_json>.*)" | spath input=git_json

I am trying to do the same through props and transforms, but it is not working:

[sourcetype]
REPORT-command = morefields_from_command

[morefields_from_command]
kv_mode = json
SOURCE_KEY = command
REGEX = (?<git_command>\S+)\s(?<git_url>\S+)\s(?<git_json>.*)

My requirement is:

git_command = gitly-upload-pack
git_url = tcp://prod-gitly-primary.domain.com:80
git_json = {"repository":{"storage_name":"default","relative_path":"infrastructure/app-config-iam-lint-rules.git","git_object_directory":"","git_alternate_object_directories":[],"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules"},"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules","gl_id":"key-id","gl_username":"uname","git_config_options":[],"git_protocol":null}

Once this is done, I then have to split git_json again, as below:

storage_name = default
relative_path = infrastructure/app-config-iam-lint-rules.git
..
..
..
git_protocol = null
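For comparison, a sketch of how the two stanzas are conventionally split across the two files ([sourcetype] stands in for the real sourcetype name). Note that kv_mode is not a transforms.conf setting; KV_MODE belongs in props.conf. Whether a REPORT can read a field produced by JSON auto-extraction depends on search-time operation order, so treat this as a sketch rather than a confirmed fix:

```ini
# props.conf
[sourcetype]
KV_MODE = json
REPORT-command = morefields_from_command

# transforms.conf
[morefields_from_command]
SOURCE_KEY = command
REGEX = ^(?<git_command>\S+)\s(?<git_url>\S+)\s(?<git_json>.*)
```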
Hi All, I am trying to build a query to track whether all the Splunk forwarders are connected to the Cluster Master, and I want to create an alert for when a forwarder is not able to connect. Could you please help with the query?
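A common starting point for detecting silent forwarders is the connection records in the _internal index; a sketch that flags forwarders that have not reported for more than 15 minutes (the threshold is just an example):

```spl
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15
```

This runs against the indexers' view of inbound forwarder connections, so it catches forwarders that stopped sending regardless of the reason.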
I have a clustered environment with monitoring set up for application logs; universal forwarders push data to the indexers. Lately I have been facing an issue where the application logs are getting indexed and are available, but a few hours later, when I search for the present day's logs in Splunk, there are 0 events for a 2-3 hour time frame and the indexed data vanishes. I am not sure if this is a known issue; any help would be much appreciated. Thanks.
Hi at all, I have a very strange problem that I'm trying to solve. I have a data source with the following fields:

user
dest_ip
start_time
end_time

I have to work out how long a user used network connections in each hour. The problem is that I can have parallel sessions that I cannot simply sum, because I could end up with more than 60 minutes of connection in one hour, and that isn't acceptable. In addition, I could have a connection from 10:05 to 10:10 and another from 10:45 to 10:50, so I cannot take the start of the first and the end of the second. Can someone hint how to approach the problem? Ciao. Giuseppe
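One way to sketch this is to expand each session into its one-minute buckets and deduplicate per user, so overlapping sessions count each minute only once and an hour can never exceed 60 minutes. This assumes start_time/end_time are already epoch seconds (convert with strptime otherwise); the index name is a placeholder:

```spl
index=network_sessions
| eval minute=mvrange(floor(start_time/60)*60, end_time, 60)
| mvexpand minute
| dedup user, minute
| eval hour=strftime(minute, "%Y-%m-%d %H:00")
| stats count as active_minutes by user, hour
```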
Hi Team, can anyone help me with this? I want to get the columns that have non-zero values over time (using timechart).

_time           Column1  Column2  Column3  Column4  Column5  Column N
2/14/2020 2:11  0        0        0        0        0        0
2/14/2020 2:12  0        0        0        0        0        0
2/14/2020 2:13  1        0        0        0        0        0
2/14/2020 2:14  0        0        1        0        0        0
2/14/2020 2:15  0        0        0        5        0        0
2/14/2020 2:16  0        0        0        0        0        0
2/14/2020 2:17  0        0        0        0        0        0
2/14/2020 2:18  0        0        0        0        0        0

The query I am using (but I am not able to remove the zero-value columns):

index=servers sourcetype=server_list Columns="*"
| timechart span=1m count as Total by Columns
| where Columns > 0
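One way to sketch dropping the all-zero series is to untable the timechart into rows, filter, and pivot back; the search and field names follow the query above:

```spl
index=servers sourcetype=server_list Columns="*"
| timechart span=1m count as Total by Columns
| untable _time Columns Total
| where Total > 0
| xyseries _time Columns Total
| fillnull value=0
```

Columns whose every value was zero never survive the where clause, so xyseries rebuilds the chart without them; fillnull restores zeros in the remaining columns' empty buckets.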
I could only find the zip files below at download.appdynamics.com:

Java Agent - Sun and JRockit JVM (zip) 4.5.18.29239 and older versions
Java Agent - IBM JVM (zip) 4.5.18.29239 and older versions
Java Agent API (zip) 4.5.18.29239 and older versions
OpenTracing Tracer (zip) 4.5.13.27526

Or does "Java Agent - Sun and JRockit JVM" work with any OpenJDK, such as openjdk-8-jdk on Ubuntu 18.04?