Activity Feed
- Posted Re: How to disable csv replication in the Search Head cluster. on Splunk Enterprise. 02-04-2021 12:18 AM
- Tagged How to disable csv replication in the Search Head cluster. on Splunk Enterprise. 02-03-2021 04:57 AM
- Posted How to disable csv replication in the Search Head cluster. on Splunk Enterprise. 02-03-2021 01:08 AM
- Karma Re: How to generate a timechart from multiple data sources? for somesoni2. 06-05-2020 12:48 AM
- Karma Re: How to troubleshoot Search Head Clustering initial bootstrap failing with error "found different peer with serverName and hostport already registered and UP"? for rbal_splunk. 06-05-2020 12:47 AM
- Karma Re: Moving manual rex to props.conf and transforms.conf for martin_mueller. 06-05-2020 12:47 AM
- Karma Re: Why am I getting "WARN AuthorizationManager - Unknown role" errors in splunkd.log after deleting the VMware and Windows Infrastructure apps? for bravon. 06-05-2020 12:47 AM
- Karma Re: splunkd.log error message for tmarlette. 06-05-2020 12:46 AM
- Karma License pools for Indexes rather than Indexers for Damien_Dallimor. 06-05-2020 12:46 AM
- Karma Re: License pools for Indexes rather than Indexers for Glenn. 06-05-2020 12:46 AM
- Karma Re: SPLUNK DB Connect: Timestamp Not Working for pmagee. 06-05-2020 12:46 AM
- Got Karma for Re: How to retrieve data from Tibco EMS with jms_ta. 06-05-2020 12:46 AM
- Karma Re: Does Splunk index gzip files? for hulahoop. 06-05-2020 12:45 AM
- Karma Re: What is the OTHER field? for Lowell. 06-05-2020 12:45 AM
- Karma Re: Change Logo on Login Screen for vcarbona. 06-05-2020 12:45 AM
- Posted Re: Error while search data from Hadoop on All Apps and Add-ons. 06-01-2018 02:00 AM
- Posted Re: Error while search data from Hadoop on All Apps and Add-ons. 05-25-2018 12:29 AM
- Posted Re: Error while search data from Hadoop on All Apps and Add-ons. 05-23-2018 11:03 PM
- Posted Error while search data from Hadoop on All Apps and Add-ons. 05-18-2018 04:37 AM
02-04-2021
12:18 AM
Hello! Thank you for your participation, but unfortunately I have already tried that option earlier 🙂 As I understand it, this setting only applies when the knowledge bundle is pushed from the SHC to the indexer nodes (distsearch.conf).
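For context, a minimal sketch of the distsearch.conf mechanism I mean (the [replicationBlacklist] stanza is the standard one; the key name "nolookups" and the pattern are my own illustration):
# distsearch.conf on the search head (sketch)
[replicationBlacklist]
nolookups = apps/*/lookups/*
This only filters what goes into the knowledge bundle sent to the indexers; it does not affect replication between SHC members, which is the problem here.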
02-03-2021
01:08 AM
Good afternoon, community. I need to exclude lookup files from replication between search heads (version 8.1.2). I tried tweaking server.conf and setting these values:
conf_replication_include.lookups = false
conf_replication_summary.blacklist.lookups = (system|(apps/*)|users(/_reserved)?/*/*)/lookups/*
If a lookup file is created through the UI it stays local, but unfortunately this does not help with the outputlookup command, and the file is distributed across the cluster.
Btool on the search head: splunk btool --debug server list | grep lookup
Search Head Clustering: Configuration Replication (when using the outputlookup command):
Do you have any ideas where else to look in order to completely rule out replication of lookup files?
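For reference, a sketch of how these settings sit in server.conf on each SHC member, assuming the [shclustering] stanza (which is where the conf_replication_* settings live):
# server.conf on each SHC member (sketch)
[shclustering]
conf_replication_include.lookups = false
conf_replication_summary.blacklist.lookups = (system|(apps/*)|users(/_reserved)?/*/*)/lookups/*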
- Tags: search head cluster
- Labels: administration, configuration
05-25-2018
12:29 AM
cloudera yarn configuration - https://ibb.co/f54KD8
splunk provider configuration - https://ibb.co/bxOCY8
cloudera iptables - https://ibb.co/iKkgRT
05-18-2018
04:37 AM
Hello!
I'm trying Splunk Analytics for Hadoop in a test environment.
I configured the provider and the virtual index; with a simple search (index=hadoop_test) results are returned and everything is fine.
But when I add extra conditions, for example source or sourcetype, the search always returns the following error.
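To illustrate (the failing search string is the same one visible in the _search field of the error dump below; the source path comes from my test data):
works: index=hadoop_test
fails: index=hadoop_test source="/user/splunk/anaconda.storage.log"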
2 errors occurred while the search was executing. Therefore, search results might be incomplete. Hide errors.
[5_13] Error while running external process, return_code=255. See search.log for more info
[5_13] Exception - java.lang.RuntimeException: summary_id did not exist in search info: {_tz=### SERIALIZED TIMEZONE FORMAT 1.0;Y9017 NW 4C 4D 54;Y9017 NW 4D 4D 54;Y12679 YW 4D 53 54;Y9079 NW 4D 4D 54;Y16279 YW 4D 44 53 54;Y14400 YG 4D 53 44;Y10800 NW 4D 53 4B;Y14400 YW 4D 53 44;Y18000 YW 2B 30 35;Y7200 NW 45 45 54;Y10800 NS 4D 53 4B;Y14400 YS 4D 53 44;Y10800 YS 45 45 53 54;Y7200 NS 45 45 54;Y14400 NS 4D 53 4B;Y14400 YS 4D 53 44;Y10800 NS 4D 53 4B;@-2840149817 1;@-1688265017 3;@-1656819079 2;@-1641353479 3;@-1627965079 4;@-1618716679 2;@-1596429079 4;@-1593820800 5;@-1589860800 6;@-1542427200 7;@-1539493200 8;@-1525323600 7;@-1522728000 6;@-1491188400 9;@-1247536800 6;@354920400 7;@370728000 6;@386456400 7;@402264000 6;@417992400 7;@433800000 6;@449614800 7;@465346800 10;@481071600 11;@496796400 10;@512521200 11;@528246000 10;@543970800 11;@559695600 10;@575420400 11;@591145200 10;@606870000 11;@622594800 10;@638319600 11;@654649200 10;@670374000 12;@686102400 13;@695779200 10;@701823600 11;@717548400 10;@733273200 11;@748998000 10;@764722800 11;@780447600 10;@796172400 11;@811897200 10;@828226800 11;@846370800 10;@859676400 11;@877820400 10;@891126000 11;@909270000 10;@922575600 11;@941324400 10;@954025200 11;@972774000 10;@985474800 11;@1004223600 10;@1017529200 11;@1035673200 10;@1048978800 11;@1067122800 10;@1080428400 11;@1099177200 10;@1111878000 11;@1130626800 10;@1143327600 11;@1162076400 10;@1174777200 11;@1193526000 10;@1206831600 11;@1224975600 10;@1238281200 11;@1256425200 10;@1269730800 11;@1288479600 10;@1301180400 14;@1414274400 10;$, now=1526642535.000000000, _sid=1526642535.12, site=default, _api_et=1526554800.000000000, _api_lt=1526642535.000000000, _dsi_id=0, _keySet=index::hadoop_test source::/user/splunk/anaconda.storage.log, _ppc.bs=$SPLUNK_ETC, _search=search index=hadoop_test source="/user/splunk/anaconda.storage.log", _shp_id=11259D3D-008A-4FB8-A329-A38D5B1D948A, _endTime=1526642535.000000000, _ppc.app=search, read_raw=1, realtime=0, _countMap=duration.command.search.expand_search;57;duration.command.search.parse_directives;0;duration.dispatch.evaluate.search;68;duration.startup.configuration;9;duration.startup.handoff;43;invocations.command.search.expand_search;1;invocations.command.search.parse_directives;1;invocations.dispatch.evaluate.search;1;invocations.startup.configuration;1;invocations.startup.handoff;1;, _ppc.user=admin, check_dangerous_command=1, generation_id=0, _bundle_version=0, indexed_realtime=0, search_can_be_event_type=1, indexed_realtime_offset=0, kv_store_settings=hosts;127.0.0.1:8191\;;local;127.0.0.1:8191;read_preference;11259D3D-008A-4FB8-A329-A38D5B1D948A;replica_set_name;11259D3D-008A-4FB8-A329-A38D5B1D948A;status;ready;, _timeline_events_preview=0, is_cluster_slave=0, internal_only=0, is_batch_mode=0, _remote_search=search (index=hadoop_test source="/user/splunk/anaconda.storage.log") | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server", summary_stopped=0, _search_metrics={"ConsideredBuckets":0,"EliminatedBuckets":0,"ConsideredEvents":0,"TotalSlicesInBuckets":0,"DecompressedSlices":0,"FieldMetadata_Events":"","Partition":{}}, _is_summary_index=0, _search_StartUp_Spent=0, _is_keepalive=0, _is_scheduled=0, _splunkd_port=8089, _is_export=0, _is_remote=0, _maxevents=0, _search_et=1526554800.000000000, _search_lt=1526642535.000000000, _startTime=1526554800.000000000, _timestamp=1526642535.172861000, is_saved_search=0, is_remote_sorted=0, _search_StartTime=1526642535.172061000, 
remote_log_download_mode=disabledSavedSearches, kv_store_additional_settings=hosts_guids;11259D3D-008A-4FB8-A329-A38D5B1D948A\;;, _rt_batch_retry=0, _auth_token=BDOelxDhBhVAs2afGYwCyerDTllb3LxtQFDtYTsESoRbSmfIDrM90g5OfDA8AWFX1lf0la5ejNDf59RIlvzTWkY3fGSaSx3gi_8xF^20lo7Qlhi^^Ug4yoWMBAdo, _drop_count=0, _provenance=UI:Search, _scan_count=0, is_shc_mode=0, rt_backfill=0, sample_seed=0, _bs_thread_count=1, _retry_count=0, _splunkd_uri=https://127.0.0.1:8089, replay_speed=0, _exported_results=0, sample_ratio=1, summary_mode=none, _query_finished=1, _optional_fields_json={}, enable_event_stream=1, _splunkd_protocol=https, _read_buckets_since_startup=0, _bs_pipeline_identifier=0, _request_finalization=0}
search.log shows the same error, along with the Java stack trace.
Can anyone tell me how to solve this problem? 🙂
02-09-2018
12:17 AM
The web interface definitely will not work. I cannot say anything about receiving the data.
06-13-2017
04:54 AM
Sorry, that was a copy-paste error. The correct search string is:
index="iot" [ | inputlookup "transaction.csv" | return 10000 $transaction_name] | rex "transaction name: (?<transaction_name>\S+)" | table transaction_name
But is the structure of the message the same? I mean "transaction name: Workflow".
06-13-2017
03:38 AM
When you use the "table" command you must specify a field name.
To make your search work, please modify it to:
index="iot" [ | inputlookup "transaction.csv" | return 10000 $transaction_name] | rex "transaction name: (?<transaction_name>\S+)" | table transaction_name
This gives you a text search first; the rex then creates the field, and the table is built from that field.
06-13-2017
03:20 AM
Or this 🙂
index=*
[| inputlookup transaction.csv
| return 10000 $search]
| rex "transaction name: (?<transaction_name>\S+)"
| stats count by index,transaction_name
03-03-2017
07:55 AM
No problem 🙂 I'm glad to hear that your problem has been solved.
03-03-2017
05:18 AM
Please try this:
index=app_ops_prod host=abcdefgh source="Test.log" SessionID="*" | timechart span=1m count(SessionID) | appendcols [search index=app_ops_prod host=abcdefgh source="Test.log" ("error.badurl" OR "error.timeout") | timechart span=1m count]
Also, you can add a chart overlay to the chart to better illustrate your data.
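In Simple XML, a chart overlay can be added with an option like this (a sketch; it assumes you overlay the count series produced by the appendcols subsearch):
<option name="charting.chart.overlayFields">count</option>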
03-02-2017
09:46 PM
In this case, might it be worth installing a separate Splunk instance used only for the modular inputs that do not support, or are not designed for, clustering, so as not to lose the ability to configure them through a web interface?
03-02-2017
03:24 AM
How indexing works
You can set up your cluster so that all the peer nodes ingest external data. This is the most common scenario. You do this simply by configuring inputs on each peer node. However, you can also set up the cluster so that only a subset of the peer nodes ingest data. No matter how you disperse your inputs across the cluster, all the peer nodes can, and likely will, also store replicated data. The master determines, on a bucket-by-bucket basis, which peer nodes will get replicated data. You cannot configure this, except in the case of multisite clustering, where you can specify the number of copies of data that each site's set of peers receives.
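A sketch of the most common scenario described above, with the same input configured directly on every peer node (the monitored path, index, and sourcetype here are hypothetical):
# inputs.conf on each peer node
[monitor:///var/log/app]
index = app_logs
sourcetype = app:log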
03-02-2017
02:12 AM
Have you tried manually installing the app on the indexer nodes, and using the files from master-apps for configuration only?
12-23-2015
05:36 AM
What helped for me:
While Splunk was running, I deleted the csv files and restarted the Splunk process. No more errors in the log.
04-13-2015
04:33 AM
1 Karma
The answer for me:
1) Copy the jars from EMS to $app_dir/bin/lib/:
jms-2.0.jar
tibjms.jar
tibjmsapps.jar
tibjmsadmin.jar
2) Create the connection with these parameters:
Destination Type - queue
JMS Topic or Queue name to index messages from - njams.events
Initialisation Mode - JNDI
JMS Connection Factory JNDI Name - QueueConnectionFactory
JNDI Initial Context Factory Name - com.tibco.tibjms.naming.TibjmsInitialContextFactory
JNDI Provider URL - tibjmsnaming://host:port
PS: you can't read from a dynamic queue.
04-03-2015
02:28 AM
Damien or Norbert, can you share an example of the connection configuration? How do you do it? 🙂
03-16-2015
05:08 AM
Mobile Server Version: 2.0.1-c25ee4e
Android Mobile App: 2.0.1
Same problem.
03-13-2015
11:35 AM
I sent the file to you.
03-13-2015
11:23 AM
Maybe. I saw problems only on hosts with the lsb_release utility.
03-13-2015
11:07 AM
Do you mean the nmon file with metrics from the server? Or something else?
03-13-2015
11:00 AM
OK, we found a workaround. 🙂
Please convert your comment to an answer.
03-13-2015
06:33 AM
Good afternoon.
When the operating system version is determined, values obtained from lsb_release are sometimes recorded instead, and such a record is not readable. Is it possible to change the dashboard configuration to display a user-friendly version?
BBBP,000,/etc/release
BBBP,001,/etc/release,"Red Hat Enterprise Linux Server release 6.4 (Santiago)"
BBBP,002,/etc/release,"Red Hat Enterprise Linux Server release 6.4 (Santiago)"
BBBP,003,/etc/release,"Red Hat Enterprise Linux Server release 6.4 (Santiago)"
BBBP,004,lsb_release
BBBP,005,lsb_release,"LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch"
BBBP,006,lsb_release,"Distributor ID: RedHatEnterpriseServer"
BBBP,007,lsb_release,"Description: Red Hat Enterprise Linux Server release 6.4 (Santiago)"
BBBP,008,lsb_release,"Release: 6.4"
BBBP,009,lsb_release,"Codename: Santiago"
On "NMON Summary Light Analysis" dashboard displayed as " LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch "
In "NMON Inventory Lookup Table":
hostname
OStype
Linux_distribution
Linux_kernelversion
nmon_version
1
Linux
Red Hat Enterprise Linux Server release 6.4 (Santiago)
2.6
14i
2
Linux
Red Hat Enterprise Linux Server release 6.4 (Santiago)
2.6
14i
3
Linux
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch
2.6 14i
4
Linux
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch
2.6
14i
5
Linux
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
2.6
14i
6
Linux
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
2.6
14i
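As a dashboard-side workaround sketch (the field name is taken from the lookup table above; the replacement label is my own), the panel search could at least normalize the unreadable values:
... | eval Linux_distribution=if(match(Linux_distribution, "^LSB_VERSION="), "Unknown (lsb_release output)", Linux_distribution)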
03-11-2015
08:06 AM
I apologize, the link displayed incorrectly.
http://answers.splunk.com/answers/189839/can-you-configure-splunk-mobile-server-if-your-spl-1.html
You cannot configure the server using a free license (as opposed to a trial). The mobile solution won't work because it requires the ACL feature of the Enterprise edition.