Hi @sabollam

You can use the following to update this within the _raw event at search time:

| eval _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw, "timestamp")/1000, "%m-%d-%Y %H:%M:%S.%3N"))

However, if you want to do this at index time, you need the following:

== props.conf ==
[yourSourcetype]
TRANSFORMS-overrideTimeStamp = overrideTimeStamp

== transforms.conf ==
[overrideTimeStamp]
INGEST_EVAL = _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw, "timestamp")/1000, "%m-%d-%Y %H:%M:%S.%3N"))
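For reference, here is roughly what the search-time eval produces on the first sample event from the question (a sketch: strftime renders in the search head's timezone, so the exact time shown assumes UTC):

```
Before: {"level":"warn","service":"resource-sweeper","timestamp":1744302465965,"message":"1 nodes are not allocated"}
After:  {"level":"warn","service":"resource-sweeper","timestamp":"04-10-2025 16:27:45.965","message":"1 nodes are not allocated"}
```

Note that json_set writes the formatted value back as a JSON string, whereas the original value was a number.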
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
@rahulhari88

You configure the site replication factor with site_replication_factor:

site_replication_factor = origin:<n>, [site1:<n>,] [site2:<n>,] ..., total:<n>

where:
- <n> is a positive integer indicating the number of copies of a bucket.
- origin:<n> specifies the minimum number of copies of a bucket that will be held on the site originating the data in that bucket (that is, the site where the data first entered the cluster). When a site is originating the data, it is known as the "origin" site.
- site1:<n>, site2:<n>, ... indicate the minimum number of copies that will be held at each specified site. The identifiers "site1", "site2", and so on are the same as the site attribute values specified on the peer nodes.
- total:<n> specifies the total number of copies of each bucket, across all sites in the cluster.

You configure the site search factor with site_search_factor:

site_search_factor = origin:<n>, [site1:<n>,] [site2:<n>,] ..., total:<n>

where:
- <n> is a positive integer indicating the number of searchable copies of a bucket.
- origin:<n> specifies the minimum number of searchable copies of a bucket that will be held on the origin site.
- site1:<n>, site2:<n>, ... indicate the minimum number of searchable copies that will be held at each specified site.
- total:<n> specifies the total number of searchable copies of each bucket, across all sites in the cluster.
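To make this concrete, a minimal manager-side server.conf sketch for a two-site cluster like the one in this thread (values are illustrative, not a recommendation; Splunk 9.x setting names are assumed, older releases use mode = master):

```
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
```

With these values, each bucket keeps at least 2 copies on its origin site and 3 copies across the cluster, of which at least 1 on the origin site and 2 overall are searchable.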
Hi @berrybob

Just to confirm: did you set CRYPTOGRAPHY_ALLOW_OPENSSL_102=1 in your /opt/splunk/etc/splunk-launch.conf, and did you then restart Splunk? Once this is done, OpenSSL 1.0.2 should be allowed.

After this, do you still get the exact same error, or is it different?
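In case it helps, this is roughly what the entry should look like (splunk-launch.conf takes plain KEY=value lines; the path assumes a default /opt/splunk install):

```
# /opt/splunk/etc/splunk-launch.conf
CRYPTOGRAPHY_ALLOW_OPENSSL_102=1
```

followed by a full /opt/splunk/bin/splunk restart so the new environment variable is picked up.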
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
@rahulhari88

Configuring the multisite manager node:

splunk edit cluster-config -mode manager -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -secret mycluster

Configuring the multisite cluster peer nodes:

Peers 1 & 2:
splunk edit cluster-config -master_uri https://x.x.x.x:8089 -mode peer -site site1 -replication_port 9100 -secret mycluster

Peers 3 & 4:
splunk edit cluster-config -master_uri https://x.x.x.x:8089 -mode peer -site site2 -replication_port 9100 -secret mycluster

Configuring a new multisite search head:

./splunk edit cluster-config -mode searchhead -master_uri https://x.x.x.x:8089 -site site2 -secret mycluster

Assign one of the members as the captain and set a member list:

./splunk bootstrap shcluster-captain -servers_list https://SH2:8089,https://SH3:8089,https://SH4:8089
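One hedged reminder from the standard docs workflow: each splunk edit cluster-config command only takes effect after a restart, and once everything is back up you can verify the cluster from the manager:

```
./splunk restart
./splunk show cluster-status
```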
@rahulhari88

Check this documentation:
https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/DeploymultisiteSHC
https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Multisitearchitecture
@rahulhari88

Key points:
- site_replication_factor: controls how to distribute raw copies of data among the sites.
- site_search_factor: controls how to distribute searchable copies.
- available_sites: defines the sites in the cluster.
- site: the site a given node belongs to (on the manager, the site where the manager node resides).
- multisite: enables multisite clustering.
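To show where these attributes live, a minimal peer-side server.conf sketch (assuming Splunk 9.x names; older releases use mode = slave and master_uri, and the cluster secret is pass4SymmKey):

```
[general]
site = site1

# the port peers use to replicate bucket copies to each other
[replication_port://9100]

[clustering]
mode = peer
manager_uri = https://x.x.x.x:8089
pass4SymmKey = mycluster
```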
Ok. Let's get it out into the open - rolling restart is one of the basic concepts of managing Splunk environments; a multisite cluster is one of the more complicated setups. You're trying to manage the latter without understanding the former. I understand that you might simply have inherited an environment and "someone has to" take care of it, but I'd urge you to take some proactive steps to learn how to do stuff properly - not just asking for help case by case, but getting some more organized training.

Having said that - a rolling restart is a mechanism that automatically restarts clustered indexers or search heads, but not all at once. That's why it's called rolling. The process is managed either by the CM in the case of indexers or by the captain in the case of search heads, and only part of your components is restarted at any one time until the whole layer has been restarted. How big that part is is configurable (if I remember correctly it's 20% by default, meaning that at any given time 20% of your indexers can be down due to the ongoing rolling restart).

On its own it doesn't cause any configuration changes (although it's typically triggered by them), so no "token loss", whatever that could mean. I can imagine some badly engineered environments where a rolling restart would cause interruptions in data ingestion, but in a well-designed setup it shouldn't.

There are two most important effects of rolling restarts of indexers:
1. Your data availability during the restart is reduced, so searches spawned during that time might return wrong/incomplete results.
2. Since Splunk limits some replication-related activities during a rolling restart, you have reduced resiliency during that time, and your environment is more prone to data loss should anything like a server crash happen during this process.

Additionally, a rolling restart, like any restart, can create many small buckets if done too often.
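For reference, a sketch of the knobs involved on the indexer tier (the CLI command below is the manual trigger, run on the cluster manager; percent_peers_to_restart lives in the manager's server.conf and defaults to 10 in recent versions):

```
# server.conf on the cluster manager
[clustering]
percent_peers_to_restart = 10

# manually trigger a rolling restart of all peers
./splunk rolling-restart cluster-peers
```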
@rahulhari88

Multisite clusters differ from single-site clusters in these key respects:
- Each node (master/peer/search head) has an assigned site.
- Replication of bucket copies occurs with site-awareness.
- Search heads distribute their searches across local peers only, when possible.
- Bucket-fixing activities respect site boundaries when applicable.

Multisite and single-site clusters share these characteristics:
1. Clusters have three types of nodes: master, peers, and search heads.
2. Each cluster has exactly one master node.
3. The cluster can have any number of peer nodes and search heads.

Multisite nodes differ in these ways:
- Every node belongs to a specific site.
- Physical location typically determines a site. That is, if you want your cluster to span servers in Bangalore and Hyderabad, you assign all nodes in Bangalore to site1 and all nodes in Hyderabad to site2.
- A typical multisite cluster has search heads on each site. This is necessary for search affinity, which increases search efficiency by allowing a search head to access all data locally.
@kiran_panchavat: it's a multisite environment, not a single site.
@rahulhari88

Perform post-deployment set-up. Integrate the search head cluster with an indexer cluster (single site):

./splunk edit cluster-config -mode searchhead -master_uri https://x.x.x.x:8089 -secret <secretkey>
./splunk restart
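Since this thread is about a multisite cluster, note that the multisite form of the same integration just adds a -site argument (a sketch consistent with the earlier posts; run it on each search head, then restart):

```
./splunk edit cluster-config -mode searchhead -master_uri https://x.x.x.x:8089 -site site1 -secret <secretkey>
./splunk restart
```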
@kiran_panchavat: Thanks for answering the first question. Can you also check and provide your inputs on the 2nd and 3rd questions as well?
@rahulhari88

The replication_factor determines how many copies of search artifacts (e.g., knowledge objects, search results) are maintained across the search head cluster. See: Choose the replication factor for the search head cluster - Splunk Documentation

For example, if you have 3 search heads (SH1, SH2, SH3), set replication_factor to 3. This ensures that every search head maintains a copy of the artifacts. The command would look like:

/opt/splunk/bin/splunk init shcluster-config -auth admin:<password> -mgmt_uri https://<SH1>:8089 -replication_port 9887 -replication_factor 3 -conf_deploy_fetch_url https://<deployer>:8089 -secret <security_key> -shcluster_label shc1

Captain: The captain is the search head that coordinates activities in the SHC, such as scheduling searches and replicating artifacts. Running the bootstrap command on SH1 will designate SH1 as the initial captain. The captain role is dynamic and can move to another search head if the current captain fails, as long as a majority of cluster members is available.
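Once the cluster is bootstrapped, you can confirm which member currently holds the captain role from any search head:

```
/opt/splunk/bin/splunk show shcluster-status -auth admin:<password>
```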
Hi,

I have the following architecture and I am trying to set up my search head cluster. I have multiple questions.

1. If I want to have 1 copy of each search artifact on each SH, what should the replication factor be in this command?

splunk init shcluster-config -auth <username>:<password> -mgmt_uri <URI>:<management_port> -replication_port <replication_port> -replication_factor <n> -conf_deploy_fetch_url <URL>:<management_port> -secret <security_key> -shcluster_label <label>

2. How will I set up the captain in this cluster? If I run this command on SH1, will it become captain?

/opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://splunk-essh01.abc.local:8089,https://splunk-essh02.abc.local:8089,https://splunk-essh03.abc.local:8089"

3. Last question: if I want to connect this SHC to the indexer cluster, will this command work?

splunk edit cluster-config -mode searchhead -site site0 -manager_uri https://LB-IP-OR-DNS-HOSTNAME:8089 -replication_port 9887 -secret "<redacted>"
Hello All,

I have a log file with the following content in JSON format. I would like to parse the timestamp, convert it to "%m-%d-%Y %H:%M:%S.%3N", and assign it back to the same timestamp field. Can someone assist me with what props.conf and transforms.conf should look like? I tried to use the _json sourcetype but it produces nothing for the timestamp field.

Note: I'm trying to test this locally.

```
{"level":"warn","service":"resource-sweeper","timestamp":1744302465965,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744302475969,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744302858869,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744304731808,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744304774636,"message":"1 nodes are not allocated"}
```
As the title says, I'm having trouble establishing a connection with my OpenShift namespace. Whenever I enter the details and hit Save and Test, an error pops up:

Setup Failed
An exception was thrown while dispatching the python script handler.

I've been searching the Python logs and it seems to be related to OpenSSL:

grep -B 5 -A 5 "mltk" .../var/log/splunk/python.log

ERROR You are linking against OpenSSL 1.0.2, which is no longer supported by the OpenSSL project. To use this version of cryptography you need to upgrade to a newer version of OpenSSL. For this version only you can also set the environment variable CRYPTOGRAPHY_ALLOW_OPENSSL_102 to allow OpenSSL 1.0.2.

As the error suggests, I tried to set the variable via the command line, as well as through /splunk/etc/splunk-launch.conf, but without success. Has anyone had this error before and knows how to solve it?
@isoutamo @PickleRick What could be the consequences for HEC data if indexers get rolling restarts every time? Loss of data? Loss of token? Please explain.
We are changing the sourcetype of the data we are currently receiving as follows:

[A_syslog]
TRANSFORMS-<class_A> = <TRANSFORMS_STANZA_NAME>

[<TRANSFORMS_STANZA_NAME>]
REGEX = \w+\s+\d+\s+\d([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+([^\s+]*)\s+
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::B_syslog
WRITE_META = true

I want to apply a different timestamp for B_syslog here, so I'm looking for that sourcetype in props.conf but I can't see it. When I change the sourcetype in the way shown above, can I get a different timestamp value only for that data?
Hi Isoutamo,

Thank you for the quick response. I'll take a look at the link provided for troubleshooting. In the meantime, to answer your questions:

We are using DB Connect 3.18.2, which is the newest version. We do have the compatible, updated JRE installed (JDK 17). I've confirmed Java is indeed working by running the java -version command on the server; it comes back with the installed version, so the environment variable is confirmed working and correct. Also, the DB Connect page would not work correctly if Java were not installed. I am able to navigate to the page, so we can probably rule out Java being an issue.

Will post a solution/update once I go through that troubleshooting page.

Kind Regards,
Hello Livehybrid,

Thank you for the quick response. We did indeed try Windows authentication. We have confirmed the permissions and password are correct, as we attempted to log in with the account; access rights are also correct, as we were able to navigate the necessary areas of the database. We also tried SQL authentication - we even made a dedicated SQL account when we attempted SQL authentication (non-Windows auth). No luck there either.

I failed to mention the DB Connect version in my original post: we have 3.18.2 installed. We also have all the necessary JDBC drivers installed.

If there are any other data points I can provide to assist with getting some help on this, please let me know.

Kind Regards,