All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have data in statistics. It has the below format:

START Request Id: 62529168377 :$LATEST {"Name": "abc","Alarm":"alarm from console" , "Trigger":{"Mname":"pqr", "Dimensions":[{"value": "xyz", "name": "efg"},{"value":"hji","value":"lkh"}] } }

I have to fetch the values inside Dimensions. How can I achieve this? Thanks in advance.
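A hedged sketch of one way to do this: since the JSON is preceded by the "START Request Id" text, the raw event is not pure JSON, so extract the JSON object with rex first and then hand it to spath. Field and path names below match the sample event; adjust them to the real data.

```
... | rex field=_raw "\$LATEST\s+(?<json>\{.+\})"
| spath input=json path=Trigger.Dimensions{} output=dimension
| mvexpand dimension
| spath input=dimension
| table name value
```

The first spath pulls each element of the Dimensions array into its own row; the second extracts the name/value pairs from each element.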
Hi, I am trying to add the values from 2 eval expressions to get the overall sum.

| eval {1_month_last_day_prior} = case((StartDateEpoch < '1_month_last_day_prior_epoch') AND (StartDateEpoch > '2_month_last_day_prior_epoch'), additionalBillingRate2, StartDateEpoch > '1_month_last_day_prior_epoch', "0", 1=1, MRR)
| eval {2_month_last_day_prior} = case((StartDateEpoch < '2_month_last_day_prior_epoch') AND (StartDateEpoch > '3_month_last_day_prior_epoch'), additionalBillingRate2, StartDateEpoch > '2_month_last_day_prior_epoch', "0", 1=1, MRR)

The below is not working and I'm not sure what syntax I should be using:

| eval sum='{1_month_last_day_prior}' + '{2_month_last_day_prior}'

Any ideas?
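Two likely issues here: the `eval {field} = ...` syntax names the new field after the *value* of that field, so there is no literal field called `{1_month_last_day_prior}` to reference later, and `case` returning the string "0" breaks the later arithmetic. If dynamic field names are not actually required, a hedged sketch with plain names and numeric zeros (field names month1/month2 are assumptions):

```
| eval month1 = case((StartDateEpoch < '1_month_last_day_prior_epoch') AND (StartDateEpoch > '2_month_last_day_prior_epoch'), additionalBillingRate2,
                     StartDateEpoch > '1_month_last_day_prior_epoch', 0,
                     1=1, MRR)
| eval month2 = case((StartDateEpoch < '2_month_last_day_prior_epoch') AND (StartDateEpoch > '3_month_last_day_prior_epoch'), additionalBillingRate2,
                     StartDateEpoch > '2_month_last_day_prior_epoch', 0,
                     1=1, MRR)
| eval sum = month1 + month2
```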
Hi, I tried searching some data from logs using this statement:

index=* sourcetype="mySource" Types* | stats count by Types

In the result I receive a table like this:
Type1 5
Type2 4
Type3 1

I know that Type4 can occur in the logs in the future, so I would like to force it into the search result. I tried some lookup approaches but couldn't get the expected result. For now I would like a table like this:
Type1 5
Type2 4
Type3 1
Type4 0

Thanks in advance for your help.
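One hedged way without a lookup: append a zero-count placeholder row and let a final stats keep the real count when Type4 actually appears.

```
index=* sourcetype="mySource" Types*
| stats count by Types
| append [| makeresults | eval Types="Type4", count=0 | fields Types count]
| stats max(count) as count by Types
```

The closing `stats max(count)` deduplicates: if Type4 events exist, their count wins over the appended 0.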
I have to exclude all subjects containing a certain set of words, e.g.:

Inc00452| RE: Exchange 2K16: Alert: Processor > % Processor Time

So I have to exclude every subject containing 'Alert: Processor > % Processor Time'.
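A hedged sketch, assuming the subject text is searchable in the raw event (index/sourcetype are placeholders):

```
index=... sourcetype=... NOT "Alert: Processor > % Processor Time"
```

If the subject is an extracted field (here assumed to be called subject), `... | search NOT subject="*Alert: Processor*Processor Time*"` works as well; the wildcards avoid depending on exact spacing.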
Hello,

I have objects whose names all carry a unique and constant "software signature". This signature is supposed to never change, and I know it is in its original state at some timestamp. Now I want to create a dashboard that displays each object's current signature, its original signature, and whether they are identical.

| makeresults
| eval Identical = if(sig_orig = sig_current, 1, 0)
| table name sig_orig sig_current Identical
| append
    [ search index=my_index earliest=".." latest=".."
      | stats values(Signatur) as sig_orig by name
      | appendcols
          [ search index=my_index
            | stats latest(Signatur) as sig_current by name ] ]

This works except that the Identical field displays nothing. Also, assuming there is a deviation and you find a 0 (i.e. the two signatures are not identical), you may want to find when that occurred, so I would like to make a timechart of the Identical field by name.

Thank you in advance, and I hope I managed to describe the task clearly.
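The likely cause is ordering: the eval and table run against the single empty makeresults row, before the appended results exist, so Identical is computed while sig_orig and sig_current are not yet present. A hedged rearrangement (field names as in the question):

```
| makeresults
| append
    [ search index=my_index earliest=".." latest=".."
      | stats values(Signatur) as sig_orig by name
      | appendcols
          [ search index=my_index
            | stats latest(Signatur) as sig_current by name ] ]
| eval Identical = if(sig_orig == sig_current, 1, 0)
| table name sig_orig sig_current Identical
```

One caveat: appendcols aligns rows by position, not by name, so if the two subsearches can return names in different orders, combining both result sets with a `stats ... by name` is safer than appendcols.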
As per the screenshot, I am getting the difference between last week's and this week's host counts. But I want the list of the differing servers from a Splunk search. How can I retrieve the list of all 23 servers whose count differs between last week and this week?
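Without seeing the original search it is hard to be exact, but one hedged way to list hosts that appear in only one of the two weeks (the index and time bounds are assumptions):

```
index=* earliest=-14d@d latest=-7d@d
| stats count by host
| eval week="last"
| append [ search index=* earliest=-7d@d latest=@d
           | stats count by host
           | eval week="this" ]
| stats dc(week) as weeks, values(week) as week by host
| where weeks=1
```

This returns each host seen in exactly one of the two weeks, along with which week it appeared in.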
I have a feed where the host field defaults to what is essentially the sourcetype, but the shost field holds the host data that I actually want in the host field. I want to know what is easier and won't break any of Splunk's built-in host-name extraction: is it better to create a FIELDALIAS (shost AS host) in props.conf for that dataset, or to do something different in transforms.conf?
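Both are possible; the difference is when they apply. A field alias is search-time only, so the indexed host value is merely masked at search time, while a transform rewrites host at index time. Hedged sketches (the stanza name and regex are assumptions about the data):

```
# props.conf -- option 1: search-time alias, shost masks host when searching
[my_sourcetype]
FIELDALIAS-use_shost = shost AS host

# props.conf -- option 2: index-time rewrite of the host metadata
[my_sourcetype]
TRANSFORMS-set_host = set_host_from_shost

# transforms.conf -- goes with option 2; REGEX must match how shost appears in _raw
[set_host_from_shost]
REGEX = shost=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

The index-time transform is applied on the indexers or a heavy forwarder, so it has to be deployed there rather than on a universal forwarder.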
Hi! I haven't gotten to the bottom of which permissions are needed for different actions. We would like to give someone with the 'user' role read-only access to this page:

app/splunk_monitoring_console/forwarder_deployment

by making a new role dedicated to this purpose. Is this possible? I see 'list_forwarders' is already granted to the user role, and searching for "monitoring" or "console" in the capability list doesn't turn up much.
We received the below warnings in splunkd.log on our indexer servers. We are running a clustered environment with 6 indexers, and the indexers keep going up and down.

11-05-2020 07:43:17.734 +0000 WARN TcpInputProc - Stopping all listening ports. Queues blocked for more than 300 seconds
11-05-2020 07:43:17.734 +0000 INFO TcpInputProc - Stopping IPv4 port 9995
11-05-2020 07:43:17.734 +0000 INFO TcpInputProc - Stopping IPv4 port 9997
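Queues blocking for more than 300 seconds usually means something downstream of the TCP input (parsing, indexing, or disk I/O) is saturated, so the input shuts its ports to apply back-pressure. A common first check is the queue fill ratios in metrics.log (host filter is a placeholder):

```
index=_internal source=*metrics.log group=queue host=<indexer>
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) by name
```

As a rule of thumb, if the queues nearest the end of the pipeline (e.g. indexqueue) fill first, look at storage performance; if the parsing queues fill first, look at heavy props/transforms on the incoming data.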
We received the below error in splunkd.log on our indexer server. We are running a clustered environment with 6 indexers, and the indexers keep going up and down.

11-05-2020 07:23:03.304 +0000 ERROR TcpInputProc - Error encountered for connection from src=X.X.X.X:60116. Broken pipe
I'm trying to monitor a JSON file with a custom sourcetype (with line breaking that I have built), but I'm not getting events. If I upload the same JSON file with the same custom sourcetype, it indexes properly and the events show with correct parsing. Also, if I monitor the same JSON file with the predefined sourcetype=_json, it indexes the file and shows the events, though the parsing is wrong. Any suggestions?
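A frequent cause of "upload works, monitor doesn't" is where the props are deployed. Uploads are parsed by the instance you upload to, while monitored files on a universal forwarder have their line breaking applied on the indexers/heavy forwarder, except for INDEXED_EXTRACTIONS sourcetypes, which are parsed on the forwarder itself. So the custom sourcetype stanza has to live in props.conf on the right tier. A hedged example stanza (the stanza name and timestamp field are assumptions):

```
# props.conf -- on the UF if using INDEXED_EXTRACTIONS, otherwise on the indexers/HF
[my_custom_json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = time
```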
Hi everyone,

I am using Splunk_TA_nix to collect Unix data on my local single Splunk instance. As far as I understand, entities get published from splunk_app_for_infrastructure and then appear as entities in ITSI. But somehow the entities are not getting transferred: I have one entity in the Infrastructure app and zero in ITSI at the moment. I activated the data input from the Infrastructure app for the app server, and my entity is getting recognized, but not pushed to ITSI. Can anybody help? I also checked all troubleshooting steps mentioned here: https://docs.splunk.com/Documentation/ITSI/4.6.2/SAIInt/Troubleshooting

Thank you!
Hi, I want to deploy a multisite cluster with 2 sites, one in a branch and another in a datacenter. I have 1 indexer per site, and I want each site to hold a copy of the other site's data in case one site goes down, but I don't really understand the site replication factor and site search factor. Can someone please help me figure out how to deploy this cluster?

Best regards
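For two sites with one indexer each, a manager-node server.conf along these lines keeps one copy of every bucket at each site, so either site can serve searches on its own. This is a sketch for exactly this topology; on older Splunk versions the mode setting is `master` rather than `manager`:

```
# server.conf on the cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
```

Here `origin:1` keeps one copy at the site where the data arrives, and `total:2` forces the second (searchable) copy onto the other site.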
Hello community and experts,

Is it possible to set a maximum concurrency for a particular saved search? The use case:

- We have an external system which calls Splunk's API to run a saved search.
- This saved search queries a particular KV store, returns records, and updates those records.
- There are two instances of this external system.
- We don't want the two instances to access the KV store at the same time.
- If we can set a max concurrency on this particular saved search, we'd set it to 1.

Your feedback would be greatly appreciated! Thanks in advance!
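savedsearches.conf does have a per-search max_concurrent setting, with an important caveat: it governs runs dispatched by the scheduler, so it may not limit ad-hoc dispatches made directly through the REST API. A hedged sketch (the stanza name is a placeholder):

```
# savedsearches.conf
[My KV Store Update Search]
max_concurrent = 1
```

If the external systems dispatch the search directly via REST rather than relying on the scheduled run, an application-level lock (for example, a flag record in the KV store itself that each instance checks and sets) may be needed instead.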
We are currently working on an engagement for a Fortune 500 company which has multiple projects run by different consulting firms and vendors. Each project has its own network, accessible over VPN. We have Splunk ES running for Project 1 in the Project 1 network, ingesting various security log sources (i.e. IAM, network traffic, endpoint detection). Now there is an ask to ingest the security log sources from Project 2 as well. The Project 1 and Project 2 environments are separated by network firewalls and currently have no connectivity with each other. If I wanted to pull Project 2 logs over to Splunk (Project 1) through the various ingestion methods, such as syslog, UFs, DB Connect, and a HF (for HEC and REST API inputs), would I need to ensure that a network tunnel is set up between Project 1 and Project 2 to allow the flow of data? At a high level, the network architecture between P1 and P2 looks like this:

P2 ---> P2 External Firewall <no connectivity> P1 External Firewall <---- P1
Hello, I am trying to send notable events to Jira Service Desk. Data fields such as the rule name are transmitted normally, but the event_id field appears blank. Without event_id, I can't get back to the notable event, and therefore can't do further analysis, such as an investigation. How can I add an event_id, or a link back to the notable event, to the Jira ticket? Thank you.
Hi everyone: I'd like to extract everything after the third "/" (counting from the left) in the url field below:

url=http://4.3.3.4/pld_accepted_business

Note: http://4.3.3.4/ will be constant; the latter part may change between pld_accepted_business and pld_accepted_non_business. Any assistance would be greatly appreciated.
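One hedged way with rex: skip past the scheme and host, then capture the remainder (the output field name url_path is an assumption):

```
... | rex field=url "^https?://[^/]+/(?<url_path>.+)$"
```

For the sample value this puts pld_accepted_business into url_path, and it keeps working if the path grows extra segments, since everything after the third "/" is captured.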
Hi @LukeMurphey, I am implementing this app in our environment and I was wondering if it is possible to do a few things:

- Use a wildcard in the file_path inputs? E.g. file_path = C:\Users\...\Chrome or file_path = C:\Users\*\Local
- Use a blacklist to ignore certain file types? E.g. blacklist1 = lnk$

Thanks in advance
Hello, I am working with historical log data from a train system and I have two different types of log files:

- log1: each row is an event that was logged every time a train arrived at a station.
- log2: each row is an event that was logged every time a train station sign displayed a message. The messages predicted how many minutes it would take for the next train to arrive.

My goal: add a new calculated column to the log2 events. The calculation is the difference between the predicted arrival time (time of the log2 event + predicted minutes until arrival) and the actual arrival time (time of the log1 event which corresponds to the log2 event).

My proposed solution: for each log1 row, generate a table of all the corresponding log2 rows. Corresponding means:

- The log2 timestamp is within a 30-minute range BEFORE the log1 timestamp.
- The log2 "next_serial" field is equal to the log1 "serial" field.
- The log2 "platform" field is equal to the log1 "platform" field.

In other words, for every train that arrived, I will list all the logged train station sign messages that predicted the arrival time of that train. After creating the table of log2 events specified above, I will calculate the new field, which I will call "prediction_deviation", for all the log2 rows in the table. I am able to do this because I now know which log1 event all the rows in this table correspond to.

Question: can someone please help me implement my proposed solution, or suggest a better method to create this calculated field? Which SPL commands can I use? I have been looking at the documentation for the stats, eval, and transaction commands, but I have not found a way to use these to solve my problem. I would appreciate any help or guidance. Thank you.
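A hedged sketch of one approach that avoids join and transaction: search both logs together, build a shared key from the serial and platform fields, attach each train's arrival time to its sign messages with eventstats, and then keep only messages within 30 minutes before the arrival. The index, sourcetype names, and predicted_minutes field are assumptions:

```
index=trains (sourcetype=log1 OR sourcetype=log2)
| eval key = coalesce(serial, next_serial) . "|" . platform
| eventstats max(eval(if(sourcetype=="log1", _time, null()))) as arrival_time by key
| where sourcetype=="log2" AND arrival_time - _time > 0 AND arrival_time - _time <= 1800
| eval predicted_arrival = _time + predicted_minutes * 60
| eval prediction_deviation = (predicted_arrival - arrival_time) / 60
```

Caveat: this assumes each (serial, platform) pair identifies a single arrival within the search window; if a serial repeats across days, narrow the time range or add a date component to the key.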
I believe, as with all things Splunk, there is more than one way to solve this. My data consists of this:

| makeresults
| eval ml=mvrange(1,4)
| mvexpand ml
| eval achieved=random() % 2
| table ml achieved

What I want is to find the highest value of ml where achieved=1 and where there has not been achieved=0 at a lower value of ml. I have worked out that this works, but I would like to see if there is a different solution, as this seems a bit overly complicated:

| makeresults
| eval ml=mvrange(1,4)
| mvexpand ml
| eval achieved=random() % 2
| table ml achieved
| eval x=if(achieved=0,0,null)
| filldown x
| eventstats max(eval(if(isnull(x),ml,null))) as y
| head 1
| eval max_ml=if(isnull(y),0,y)

where max_ml will give the desired outcome. Anyone else see an alternative solution? Note the mvrange size can be any size, so not just 3 values.
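One hedged alternative: a running minimum of achieved flips to 0 at the first failure and stays there, so the answer is the highest ml where the running minimum is still 1. This assumes rows arrive sorted by ml ascending, as mvrange produces:

```
| makeresults
| eval ml=mvrange(1,4)
| mvexpand ml
| eval achieved=random() % 2
| streamstats min(achieved) as run_min
| stats max(eval(if(run_min=1, ml, null()))) as max_ml
| eval max_ml = coalesce(max_ml, 0)
```

If the first row already has achieved=0, no row has run_min=1 and the coalesce returns 0, matching the behavior of the original solution.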