
The old events cannot be searched because they're on the old volume. Indexers have only one volume definition, so they only know the current volume. Use OS tools to copy the directories from the old volume to the new one, then restart the indexers.
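For illustration, a minimal sketch of the copy step (the mount points /old_volume and /new_volume and the index name main are assumptions; use the paths from your own volume and index definitions):

# Stop Splunk so no buckets are written during the copy
$SPLUNK_HOME/bin/splunk stop

# Copy the index directories from the old volume to the new one,
# preserving permissions and timestamps
rsync -a /old_volume/main/db/ /new_volume/main/db/

# Restart so the indexer discovers the buckets on the new volume
$SPLUNK_HOME/bin/splunk start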
Hi @Paul.Gilbody, can you share the solution here? I'm stuck with the same issue.
App started successfully (id: 1712665900147) on asset:
Loaded action execution configuration
executing action: test_asset_connectivity
Connecting to 192.168.208.144...
Connectivity test failed
1 action failed
Failed to connect to PHANTOM server. No route to host.
Connectivity test failed

I am facing this issue and have tried every possible way to fix it.
I would look at this, but unfortunately playbooks that were developed in 6.x will not load in 5.x
I have the same issue, but these arguments are not set in the code? It's the same issue the OP is writing about: the table is shown if I select the classic dashboard, but not in Studio.
Hi all, I created a volume and changed all homePath for all indexes to use this volume. Now I can't search on events that existed before this volume was created, and the search heads only show events that are on this volume. How can I move old and existing events to this volume so I can search on them? Thank you.
Found my answer here: Customize Incident Review in Splunk Enterprise Security - Splunk Documentation
Hello guys, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and the Splunk Enterprise Helm chart. While trying to change the initial admin credentials on all the instances, I face the following issue: all instances come up and ready as Kubernetes pods except the indexers, which will not start and remain in an error phase without any logs indicating the reason. The following is a snippet of my values.yaml file, which is being provided to the Splunk Enterprise chart:

sva:
  c3:
    enabled: true
    indexerClusters:
      - name: idx
    searchHeadClusters:
      - name: shc

indexerCluster:
  enabled: true
  name: "idx"
  replicaCount: 3

defaults:
  splunk:
    hec_disabled: 0
    hec_enableSSL: 0
    hec_token: "test"
    password: "admintest"
    pass4SymmKey: "test"
    idxc:
      secret: "test"
    shc:
      secret: "test"

extraEnv:
  - name: SPLUNK_DEFAULTS_URL
    value: "/mnt/splunk-defaults/default.yml"

Initially, I was not passing SPLUNK_DEFAULTS_URL, but after some debugging I found that the "defaults" field writes to /mnt/splunk-defaults/default.yml only, while by default all instances read from /mnt/splunk-secrets/default.yml, so I had to change it. As a result, the admin password changed on all Splunk instances to "admintest", but the indexer pods still would not start.

Note: I also tried to change the password by providing the SPLUNK_PASSWORD environment variable to all instances, but saw the same behavior.
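A minimal sketch of how the failing pods can be inspected further (the namespace splunk-operator and pod name splunk-idx-indexer-0 are assumptions; substitute your own):

# Show scheduling and container events for the failing indexer pod
kubectl -n splunk-operator describe pod splunk-idx-indexer-0

# If the container started and then crashed, fetch the previous container's output
kubectl -n splunk-operator logs splunk-idx-indexer-0 --previous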
Yes. The [tcpout] defaultGroup setting tells your Splunk component what to do with events by default. So if you don't modify the _TCP_ROUTING field, your events should be going to the my_indexers group. But when you overwrite _TCP_ROUTING with just distant_HF_formylogs, you'll be sending to that group only.
Ok, I understand what you say. But sorry, I forgot to mention that I have a default [tcpout] in my conf:

[tcpout]
defaultGroup = my_indexers
forceTimebasedAutoLB = true
forwardedindex.filter.disable = true

[tcpout:my_indexers]
server = indexer1:9997, indexer2:9997

So if I'm correct, the inputs.conf:

[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false

redirects the logs to the default output group, because no output is specified. Correct me if I'm wrong, and sorry for forgetting this config in the first question.
This is the result: I would expect a LOG field to be created for each event, with the different values of its log1, log2, or logn. The regular expression works (tested on regex101), and other_transforms_stanza does not apply to this field.
Correct.  I had a different user. Created an admin one and the error went away.
Yes, I understand. They are not "cloned", they are redirected. The events are sent to _all_ output groups specified in outputs.conf (or to the specified output group(s), if you manipulated _TCP_ROUTING manually). Within each applicable group, the event is sent to just one of the servers configured in that group. So you must make sure that the events you want have both output groups specified in _TCP_ROUTING.
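For illustration, a minimal props/transforms sketch that keeps both groups in _TCP_ROUTING (the stanza name route_both and the sourcetype cloned_sourcetype are assumptions; the group names come from this thread):

props.conf:
[cloned_sourcetype]
TRANSFORMS-routing = route_both

transforms.conf:
[route_both]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = my_indexers,distant_HF_formylogs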
Hi all, Since the redesign of the new Incident Review page, we appear to have lost the ability to search for Notables using a ShortID. With the old dashboard this was achieved by selecting Associations from the filters and entering the ShortID you were looking for, but the new Incident Review dashboard appears to have taken this functionality away. Is there any way to achieve this?
Hi PickleRick, thanks for your response and time. The cloned logs are routed to only one instance, specified in the outputs.conf. The "original" logs, not the cloned ones, are directed to my local indexers; just the cloned sourcetype is directed to another heavy forwarder specified in the outputs.conf placed in the same app as the props and transforms. Not sure if I'm being clear.
The "convert mktime()" could also be the way to go but you need to specify the time format with... the "timeformat=" option. Otherwise Splunk has to guess and usually guesses wrong.
Please, don't dig out old threads. Let them rest in peace. But seriously, to gain more visibility, you should just make a new thread, possibly linking to any information you already found for reference. But to the point - if all else fails, you can always create your own script using Selenium and emulate a user clicking through your SharePoint share and downloading the files, but it's a very, very ugly idea.
You already had some suggestions which are OK, but the question is what your limitations on this search are. How many events do you expect from each of those data sets? How long is the search supposed to take? These can warrant a different approach to the problem. For example, since you're dealing with email data, it's a relatively valid question why you aren't using the CIM datamodel (and having it accelerated).
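For illustration, a sketch of the accelerated-datamodel route (assuming the CIM Email datamodel is populated and accelerated; the grouping fields are just examples):

| tstats summariesonly=true count from datamodel=Email by All_Email.src_user All_Email.recipient
| rename All_Email.* AS *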
10k results, not 50k. The 50k result limit is for the join command; a "normal" subsearch has a default 10k result limit. (Yes, all those limits can be confusing and are easy to mistake for one another.)
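For reference, a sketch of where those defaults live in limits.conf (values shown are the usual defaults; check the limits.conf.spec for your version):

[subsearch]
maxout = 10000

[join]
subsearch_maxout = 50000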