All Topics

Hello,

We are in a multi-site indexer cluster environment and are upgrading our infrastructure from 3 indexers to 6: we will add 6 new indexers and decommission the 3 old ones. Do you know what copy speed, in MBytes/s, to expect once data starts being copied from the 3 old indexers to the 6 new ones? We ran a test in our development environment, simulating a similar scenario, and the copy started very fast, peaking at 170 MBytes/s (measured with an nmon session on the source indexer machine).

To start the copy process we:
- added the new indexers to the Master Node
- switched the Splunk HFs to forward data to the new indexers
- ran, one by one on each old indexer, the command: splunk offline --enforce-counts

Before starting the test we read the following documentation: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Takeapeeroffline#Estimate_the_cluster_recovery_time_when_a_peer_gets_decommissioned

The official documentation states that copying 10GB (rawdata and/or index files) from one peer to another across a LAN takes about 5-10 minutes, so the copy speed should range from roughly 136 Mbit/s to 272 Mbit/s. If the speed we observed is correct (more than 1 Gbit/s), do you know if there is any way to limit the output bandwidth?

Thanks a lot,
Edoardo
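If the replication speed needs capping, one knob worth looking at (a hedged sketch, not a verified fix for this scenario) is the number of concurrent replications the cluster master allows per peer; lowering it throttles aggregate copy throughput:

```ini
# server.conf on the cluster master -- hedged sketch
# max_peer_rep_load limits how many concurrent replications
# a single peer can take part in as a target (default is 5)
[clustering]
max_peer_rep_load = 2
```

Note this limits concurrency rather than bandwidth directly, so the actual MBytes/s per stream will still depend on link speed.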
Hi all,

I have an alert which sends an email to all users, but I need to add HTML tags to the content. For example, I need to add images and make some text bold, colored, and so on. When I add the tags and send the mail, it doesn't work. Does someone know how I could do this? I'm required to include it in the mail.

Thank you,
Sasquatchatmars
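For context, one setting that commonly controls this behavior (a hedged sketch; whether it belongs in the global email action or a per-search override depends on the deployment):

```ini
# alert_actions.conf -- hedged sketch
# content_type controls whether the alert email body is sent as HTML or plain text
[email]
content_type = html
```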
Hey everybody,

We just started a SmartStore migration project and were warned not to migrate indexes whose buckets use auto_high_volume. So we would like to change the maxDataSize setting from auto_high_volume to auto for several already existing indexes in an IDX cluster.

What will be the impact of this change? Will it cause increased I/O or other resource usage right after the change? Will it "cut" the already existing high-volume buckets into 750 MB pieces, i.e. will it be applied retroactively? And what will happen to a hot bucket that is larger than 750 MB at the time I apply the change?

Thank you in advance,
Tamas
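The change itself would look like this (a hedged sketch; the index name is a placeholder):

```ini
# indexes.conf -- hedged sketch
# auto caps buckets at 750 MB; auto_high_volume caps them at 10 GB on 64-bit systems
[my_index]
maxDataSize = auto
```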
Hello,

We are planning to move from a single-instance installation to a cluster (1 SH + 3 indexers). We have 50+ supporting apps downloaded from Splunkbase. We use one primary app which holds all the main configuration (props, transforms, dashboards, data models, etc.), and we will distribute this primary app to the indexers using the configuration bundle.

I wanted to know whether we also need to push all the other supporting apps, like Lookup Editor, Calender_App, etc. As I understand it, these supporting apps/add-ons are only used on the search head, so do I need to push them to all indexers?

Thanks
Hi,

I have events like this:

log=log_name {"timestamp":"2020-10-13T13:44:06.242Z","version":"1","message":"xxx","name":"abcd","level":"INFO","id":"123","env":"dev"}

I have set up a props.conf:

[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)log\=
TRUNCATE = 999999
TRANSFORMS-extractions = indexed_log
TRANSFORMS-remove = remove_log

And a transforms.conf:

[indexed_log]
REGEX = ^log\=(.+?)\s
FORMAT = log::$1
WRITE_META = true

[remove_log]
REGEX = ^log\=.+?\s((?:\{|\[).+?(?:\}|\]))$
DEST_KEY = _raw
FORMAT = $1

That works well, except I found that some events are not parsed. The only difference I noticed is that those events have a field containing a lot of characters (more than 6000). Example of a log that is not parsed:

log=test {"timestamp":"2020-10-13T12:10:57.177Z","version":"1","message":"Error ","name":"1234","level":"ERROR","field_with_characters_above_6000":"abcdef...….","env":"dev"}

The props.conf and transforms.conf are no longer effective on that kind of event.

Can you help me please?
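One detail worth checking (a hedged guess suggested by the 6000-character field): transforms.conf regexes only scan a limited window of each event, controlled by LOOKAHEAD, which defaults to 4096 characters. Raising it for the affected transform might look like:

```ini
# transforms.conf -- hedged sketch
# LOOKAHEAD widens the window of characters the REGEX may scan (default 4096)
[remove_log]
REGEX = ^log\=.+?\s((?:\{|\[).+?(?:\}|\]))$
DEST_KEY = _raw
FORMAT = $1
LOOKAHEAD = 32768
```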
I have data coming from an Avaya phone system that provides the end time of the event and the duration; I am creating a start_time field based on those fields. I get a table with the trunk number, start time, duration, and end time. I am trying to get trunk capacity, so I need to know how many events are listed as occurring between the start time and end time.

index="voice_switch"
| eval Trunk = mvappend(Incoming_Trunk_Group, Outgoing_Trunk_Group)
| eval Dur = (DurH * 60 * 60) + (DurM * 60) + (DurTenths * 6)
| eval Start_Time = _time - Dur
| eval Duration = tostring(Dur, "duration")
| convert ctime(Start_Time)
| convert ctime(_time)
| rename _time as End_Time
| table Trunk, Start_Time, Duration, End_Time

Trunk  Start_Time           Duration  End_Time
8940   10/13/2020 07:59:58  00:23:48  10/13/2020 08:23:46
8940   10/13/2020 08:02:41  00:26:06  10/13/2020 08:28:47
8905   10/13/2020 08:05:57  00:16:54  10/13/2020 08:22:51
8940   10/13/2020 08:08:14  00:14:00  10/13/2020 08:22:14
8940   10/13/2020 08:08:18  00:14:36  10/13/2020 08:22:54
8905   10/13/2020 08:08:53  00:13:00  10/13/2020 08:21:53
8940   10/13/2020 08:09:37  00:26:18  10/13/2020 08:35:55
8940   10/13/2020 08:11:01  00:12:30  10/13/2020 08:23:31
8901   10/13/2020 08:11:22  00:11:18  10/13/2020 08:22:40
8940   10/13/2020 08:11:37  00:11:36  10/13/2020 08:23:13
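A common way to count overlapping calls (a hedged sketch reusing the field names from the query above) is the concurrency command, which expects epoch start times sorted in ascending order and a duration in seconds:

```
index="voice_switch"
| eval Trunk = mvappend(Incoming_Trunk_Group, Outgoing_Trunk_Group)
| eval Dur = (DurH * 60 * 60) + (DurM * 60) + (DurTenths * 6)
| eval Start_Time = _time - Dur
| sort 0 Start_Time
| concurrency start=Start_Time duration=Dur
| timechart max(concurrency) as peak_concurrent_calls
```

concurrency computes, for each event, how many other events were in flight at its start time, which maps directly onto trunk-capacity questions; since it takes no "by" clause, a per-trunk view would need a separate search (or filter) per trunk.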
Hi everyone, please help us with this: how can we let a user download a file from an AWS S3 bucket via a button click on a Splunk HTML dashboard?

Regards,
Manikanth
Hello all,

I have a requirement to display the search query's time range in the body of the email alert. Is there a way I can do that?

Search:
index="ABC" source=XYZ earliest=-3month latest=now | table ClientId Restricted Success Rejected Failed Total

I want to display the time range that my search considered in the email alert.

Thank you
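One pattern that can surface the range inside the results themselves (a hedged sketch built on the search from the question):

```
index="ABC" source=XYZ earliest=-3month latest=now
| addinfo
| eval Search_Range = strftime(info_min_time, "%Y-%m-%d %H:%M") . " to " . strftime(info_max_time, "%Y-%m-%d %H:%M")
| table ClientId Restricted Success Rejected Failed Total Search_Range
```

addinfo attaches info_min_time and info_max_time (the search's earliest and latest bounds, in epoch seconds) to every result, so the formatted range then appears in the emailed table.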
Hello,

I would like Splunk to show some captions for the x-axis. As you can see, displaying all ticks would be too cramped, so Splunk decides to show none. However, some ticks would still be helpful. Is it possible to set the number of ticks shown?

base search | chart agg(field_geberdifferenz) as "Geberdifferenz Hubwerk" by Blockstelle

or

base search | stats agg(field_geberdifferenz) as "Geberdifferenz Hubwerk" by Blockstelle

leads to this plot. I can't share any data, though.

Thanks
Hello,

I have the following data:

book="title1" reader="reader1"
book="title1" reader="reader1"
book="title1" reader="reader2"
book="title1" reader="reader2"
book="title2" reader="reader1"
book="title2" reader="reader3"
book="title2" reader="reader3"
book="title2" reader="reader3"
...

I'd like to represent it in a multi-series bar chart showing the number of reads by reader per title: one bar per title on the x-axis, each bar segmented by reader (e.g. title1 split into reader1 and reader2, title2 split into reader3 and reader1), with count on the y-axis.
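A stacked-bar version of this can be sketched as (hedged; built only on the two fields shown above):

```
... | chart count over book by reader
```

Rendered as a column chart in stacked mode, each book becomes one bar segmented by reader; switching the chart's stack mode to unstacked gives side-by-side bars per reader instead.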
Hello all,

We have configured our monitoring tools to send network and application alert events as SNMP traps. Splunk monitors the /var/log/snmp-traps.log file, parses the data, and indexes it; no problem there. All fields necessary for a correlation search are present (severity, title, etc.), notable events are created by the ad-hoc correlation searches, and the searches run over a 1-minute window; no problem here either.

However, breaking rules are not working as expected. For example, multiple episodes start from the exact same starting event; they may break prematurely, so we end up with more than one episode for the same starting event. We also observed some episodes that receive just one event and never get closed. We have experimented with almost every combination in the aggregation policies.

What is going on here? Why does it get confused? I know that is hard to diagnose without seeing the actual settings and configuration, but I did my best to follow the documentation when setting up the whole policy. Has anyone else had this issue?
Hi all,

I have a dilemma where the source counts do not match the count inserted into the summary index. Sample query that was used:

Base search:
index=sample_index
| rex mode=sed field=author "s/(\w|\d|[\D\W])/*/g"
| eval raw_event=_raw
| rex mode=sed field=raw_event "s/(:?author\=[\w|\d|\D\W]+)/author= *********/g"
| fields user owner ip mac_address input_file dest_file log_name orig_time orig_sourcetype act category default message message_id raw_mac severity tag vendor product

Summary indexing is then enabled; the search runs every 30 minutes over the past 30 minutes.

Validation: if the base search is used, it returns, for example, 100k events; when checked in the summary index, only 50% or less was inserted. Note that not all fields are present in all events; the owner field, for example, has 3 values and appears in 17.377% of events.

Question: when inserting into the summary index, does it summarize the fields and drop those with null values? Or is that the expected behavior of summary indexing?

Thanks!
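A quick way to compare the two sides over the same already-completed window (a hedged sketch; the summary search name is a placeholder) is to count both in one search, which rules out timing skew before investigating field-level drops:

```
index=sample_index earliest=-30m@m latest=@m
| stats count as events
| eval side="source"
| append
    [ search index=summary source="my_summary_search" earliest=-30m@m latest=@m
      | stats count as events
      | eval side="summary" ]
| table side events
```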
Dear all,

I'm very new to Splunk! In my organization, Splunk Enterprise has been deployed, and management wants to monitor all data platforms and applications in Splunk. Lately, I have deployed Cloudera CDP 7.1.3 in our data center, and management expects Splunk to analyze the Hadoop log files.

How can I use Splunk to proactively monitor user activities, service logs, and server logs in CDP 7.1.3? Is any additional component required?

I'd appreciate it if you could share your knowledge on this!

Thanks
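At its simplest, collection means installing a universal forwarder on each CDP node to tail the relevant log directories (a hedged sketch; the paths, index, and sourcetypes below are assumptions to adapt to the actual CDP layout):

```ini
# inputs.conf on a universal forwarder -- hedged sketch
[monitor:///var/log/cloudera-scm-server]
index = hadoop
sourcetype = cloudera:scm

[monitor:///var/log/hadoop-hdfs]
index = hadoop
sourcetype = hadoop:hdfs
```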
Hi,

I want to mask or replace all the words in my file with some specific word. For example, Myfile.csv contains:

"My splunk architecture consists of 5 servers"

I want all the words in Myfile.csv to be replaced, like below:

"splunk splunk splunk splunk splunk splunk splunk"

Currently I am using the props.conf below:

[sourcetype]
SEDCMD-replace_words_with_splunk = s/\S++/splunk/

But only the first word of my file is getting replaced. Could anyone suggest a way to capture all the words in my file and replace them with another word before indexing?
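For reference, sed-style substitutions replace only the first match per event unless the global flag is appended; a hedged sketch of the adjusted stanza:

```ini
# props.conf -- hedged sketch
# the trailing /g replaces every \S+ run (each word), not just the first
[sourcetype]
SEDCMD-replace_words_with_splunk = s/\S+/splunk/g
```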
I have the below log text:

2020-10-12 12:30:22.538 INFO 1 --- [ener-4] c.t.t.o.s.service.recServi : received users : {"userId":"12333","userType":"Normal"}
2020-10-12 12:30:22.538 INFO 1 --- [ener-4] c.t.t.o.s.service.recServi : Received usertype is:Normal
2020-10-12 12:30:22.540 INFO 1 --- [ener-4] c.t.t.o.s.s.ReceiverPrepaidService : Validating the User with userID:1233 systemID:111wdsa
2020-10-12 12:30:22.540 INFO 1 --- [ener-4] c.t.t.o.s.util.Common : The Reason Code is valid for UserId: 12333 userId:12333
2020-10-12 12:30:22.577 INFO 1 --- [ener-4] c.t.t.o.s.r.OlServiceValidatorDao : Saving User into DB ..... with User-ID:12333

........ and again the same type of lines.

I need to extract the userId and timestamp from the line:

Validating the User with userID:1233 systemID:111wdsa

I am able to extract userId and group by it with a count:

index="tim" logGroup="/ecs/strr" "logEvents{}.message"="*Validating the User with userID*"
| spath output=myfield path=logEvents{}.message
| rex field=myfield "(?<=Validating the User with userID*:)(?<userId>[0-9]+)(?= systemID:)"
| table userId
| dedup userId
| stats count values(userId) by userId

But I cannot extract the timestamp and create a timechart of userId grouped by timestamp from all the log text. Any help would be really helpful for us.
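A hedged sketch of a timechart over the same extraction (assuming _time already carries the event timestamp; otherwise it would first need to be parsed out of the logEvents{} payload):

```
index="tim" logGroup="/ecs/strr" "logEvents{}.message"="*Validating the User with userID*"
| spath output=myfield path=logEvents{}.message
| rex field=myfield "Validating the User with userID:(?<userId>[0-9]+) systemID:"
| timechart span=5m count by userId
```

timechart buckets events by _time, so each userId becomes its own series per 5-minute span.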
Hello,

I am trying to calculate users' browse time and bandwidth usage by looking at the firewall's log files. As far as I can tell, the best way to do this is to use the transaction command. However, to make the transaction command more efficient, I tried using it with tstats (which may be completely wrong). My assumption is that if there is more than one log entry from a source IP to a destination IP with the same time value, it belongs to the same session.

Here is my query:

| tstats sum(datamodel.mbyte) as mbyte from datamodel=datamodel by _time source destination
| transaction source destination maxpause=1m

My questions are:
- Is there a more efficient way to calculate these values?
- The max duration value for my query always equals the maxpause value; shouldn't there be values greater than maxpause?

Thanks in advance
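A transaction-free alternative that often scales better (a hedged sketch over the same tstats output; the 60-second gap mirrors maxpause=1m):

```
| tstats sum(datamodel.mbyte) as mbyte from datamodel=datamodel by _time source destination
| sort 0 source destination _time
| streamstats current=f last(_time) as prev_time by source destination
| eval new_session = if(isnull(prev_time) OR _time - prev_time > 60, 1, 0)
| streamstats sum(new_session) as session_id by source destination
| stats sum(mbyte) as session_mbyte range(_time) as session_duration by source destination session_id
```

streamstats marks every gap larger than 60 seconds as the start of a new session, and the final stats sums bandwidth and measures duration per session without transaction's memory limits.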
We have tons of unique records per day, so when I query for the last 15 or last 30 days, the dropdown slows down. Is there a solution so that when I type a prefix such as "ADF" into the dropdown, it auto-populates with only the top 3 or top 5 matching records? Or is there any other way to fix this dropdown performance issue?
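One way to keep the populating search small (a hedged sketch; the index and field names are placeholders, and $record_filter$ would be fed by a separate text input):

```
index=my_index earliest=-30d@d
| stats count by record
| search record="$record_filter$*"
| sort - count
| head 5
```

Another common approach is driving the dropdown from a pre-built lookup that a scheduled search refreshes with outputlookup, so the input never runs a raw 30-day search.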
I have a CSV inventory file which changes dynamically, and the same file currently needs to be updated in Splunk manually. Is there a way to integrate the URL with Splunk so the lookup file is updated automatically?
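If the inventory data is (or can be) indexed in Splunk, a scheduled search can regenerate the lookup without manual uploads (a hedged sketch; the index, fields, and lookup name are placeholders):

```
index=inventory_feed earliest=-1d@d
| dedup host
| table host ip owner
| outputlookup inventory.csv
```

For a file that only exists at an external URL, something outside core SPL (a script or scripted input) would first have to fetch it, since there is no built-in poller that downloads a CSV from an arbitrary URL into a lookup.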
Hi,

I'm trying to filter the results for certain systems in a timechart and am not succeeding. Suppose I have 5 systems in the chart, i.e. there are 5 series. I want to show only the series of one or more systems by clicking on the series or on that system's legend entry. How can I do it?
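Legend clicks typically only highlight a series; actual filtering usually goes through a dashboard input token instead (a hedged sketch; the index, fields, and token name are placeholders):

```
index=metrics system IN ($system_token$)
| timechart avg(response_time) by system
```

With a multiselect input feeding $system_token$, choosing one or more systems re-runs the chart with only those series.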
We have enabled the jobs to pull records from each of the tables, after which we created a report/dashboard per our requirements. We can see that for a few tickets the data is not being indexed with the latest details. For example, ticket number INC101023 was closed in ServiceNow on 20-Apr-2020, but in the Splunk index it still shows as "Work in Progress" with the date 15-Apr-2020. Can you please let us know how to retrieve the missing data?

I am getting the below errors in the snow logs:

ValueError: Expecting : delimiter: line 1 column 1791875 (char 1791874)

2020-10-10 14:30:59,831 ERROR pid=20332 tid=Thread-29 file=snow_data_loader.py:collect_data:170 | Failure occurred while getting records from https://henkelprod.service-now.com/change_request. The reason for failure= , u'detail': u'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.

2020-10-10 14:31:34,604 ERROR pid=20332 tid=Thread-24 file=snow_data_loader.py:collect_data:170 | Failure occurred while getting records from https://henkelprod.service-now.com/rm_defect. The reason for failure= {u'message': u'Transaction cancelled: maximum execution time exceeded', u'detail': u'maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.

2020-10-10 15:00:59,950 ERROR pid=20332 tid=Thread-29 file=snow_data_loader.py:collect_data:170 | Failure occurred while getting records from https://henkelprod.service-now.com/change_request. The reason for failure= {u'message': u'Transaction cancelled: maximum execution time exceeded', u'detail': u'Transaction cancelled: maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.

2020-10-13 12:28:51,678 ERROR pid=8752 tid=Thread-39 file=snow_data_loader.py:_json_to_objects:268 | Obtained an invalid json string while parsing. Got value of type <type 'str'>. Traceback:
Traceback (most recent call last):
  File "D:\SplunkProgramFiles\etc\apps\Splunk_TA_snow\bin\snow_data_loader.py", line 265, in _json_to_objects
    json_object = json.loads(json_str)
  File "D:\SplunkProgramFiles\Python-2.7\Lib\json\__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "D:\SplunkProgramFiles\Python-2.7\Lib\json\decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "D:\SplunkProgramFiles\Python-2.7\Lib\json\decoder.py", line 380, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting : delimiter: line 1 column 972953 (char 972952)