Activity Feed
- Posted Re: How to identify a skipped scheduled accelerated report? on Splunk Search. 09-30-2024 06:21 AM
- Karma Re: How to identify a skipped scheduled accelerated report? for isoutamo. 09-30-2024 06:21 AM
- Posted Re: How to identify a skipped scheduled accelerated report? on Splunk Search. 09-24-2024 09:36 AM
- Posted How to identify a skipped scheduled accelerated report? on Splunk Search. 09-24-2024 08:22 AM
- Posted Re: PART2 How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 05-22-2024 05:07 AM
- Posted Re: PART2 How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 05-15-2024 06:21 AM
- Posted Re: How to resolve index buckets stuck in "Fixup Tasks - In Progress"? on Splunk Enterprise. 05-03-2024 04:07 PM
- Posted How to resolve index buckets stuck in "Fixup Tasks - In Progress"? on Splunk Enterprise. 05-03-2024 04:03 PM
- Posted Re: How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 04-02-2024 07:10 AM
- Posted PART2 How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 03-01-2024 10:17 AM
- Karma Re: How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? for isoutamo. 03-01-2024 07:23 AM
- Posted Re: How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 03-01-2024 07:21 AM
- Posted Re: How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 03-01-2024 07:06 AM
- Posted How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 02-29-2024 12:17 PM
- Tagged How to migrate a distributed, clustered Splunk (9.0.4.1) deployment from OS RHEL 7 to new servers RHEL 9? on Splunk Enterprise. 02-29-2024 12:17 PM
- Karma Re: How can I determine the total amount of data received and indexed by UFs? for richgalloway. 11-29-2023 05:34 AM
- Posted Re: How can I determine the total amount of data received and indexed by UFs? on Getting Data In. 11-29-2023 05:33 AM
- Karma Re: How can I determine the total amount of data received and indexed by UFs? for richgalloway. 11-29-2023 05:33 AM
- Karma Re: How can I determine the total amount of data received and indexed by UFs? for richgalloway. 11-28-2023 12:54 PM
- Posted Re: How can I determine the total amount of data received and indexed by UFs? on Getting Data In. 11-28-2023 12:48 PM
09-30-2024 06:21 AM
@isoutamo Yes, you are correct. The acceleration detail has a Summary Id, which does correspond to the savedsearch_name: _ACCELERATE_<redacted>_search_nobody_<Summary Id>_ACCELERATE_. This confirms the issue is the License Usage Data Cube report acceleration. I will need to adjust the search resources to prevent the skipping. Thank you!
09-24-2024 09:36 AM
Thank you. I am aware of that modal in the MC, but it gives me the same arcane name, for example: _ACCELERATE_111111-22222-333-4444-123456789_search_nobody_123456978_ACCELERATE_. However, the origin host is my dedicated MC Splunk server, and there is only one accelerated report listed, for License Usage Data Cube, so I assume that is the culprit. But why is it skipping? I clicked the accelerate option; perhaps I need to adjust the max scheduled searches? Yes, I found a number of garbage scheduled reports from years ago eating up resources and starving the accelerated report for the License Usage Data Cube. I incorrectly assumed that report would have priority for resources. Thank you for your help.
09-24-2024 08:22 AM
I have noticed that a saved search is chronically skipped, almost 100% of the time, but I cannot trace it back to its origin. The search name is _ACCELERATE_<redacted>_search_nobody_<redacted>_ACCELERATE_. From _internal, it is in the search app, report acceleration, user nobody. _audit provides no clues either. How do I trace this to the source? Thank you
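For anyone tracing a similar skip, here is a sketch of a search against the scheduler log in _internal; the field names (savedsearch_name, app, user, status, reason) follow the standard scheduler sourcetype, but verify them in your own environment:

```
index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, app, user, reason
| sort - count
```

The reason field usually names the limit that caused the skip (e.g. maximum concurrent searches reached), which points at the knob to adjust.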
Labels: Other
05-22-2024 05:07 AM
I think all the suggestions that spawned from my original question are worth considering in the context of a user's individual environment. For instance, I was not able to spin up a duplicate SHC or IDXC, and thus chose a different option. However, I suggest that new threads be created for each solution, with a detailed explanation; this thread is getting a bit long.
05-15-2024 06:21 AM
@iankitkumar Yes, I did get it to work, but it depends on how the disks and mount points are set up on the existing host. If the OS disk and Splunk disk are separate, with distinct Volume Groups, then a disk swap should work. For example, on a Linux host where sda is the OS disk and sdb is the Splunk disk with mount point /opt/splunk, here are the steps. Note: I install with .tgz files, not RPM. Of course, test in your dev environment first!
1. Install the same version of Splunk on the new host with the new Linux OS version (I use the same username/password that I used to install Splunk on the existing host).
2. Enable boot-start managed by systemd and verify the install runs correctly.
3. Shut down the old and new hosts.
4. Swap the disk.
5. Rename/re-IP and open the required firewalld ports (if needed for your environment), then verify it is working.
If you cannot do a disk swap (because the host is not configured to do so), you can always tar up /opt/splunk and untar it over /opt/splunk on the new host, on top of a fresh install of the same Splunk version. The documentation describes this method: "Migrate a Splunk Enterprise instance from one physical machine to another" in the Splunk docs, under "Your Splunk Enterprise installation is on an operating system that either your organization or Splunk no longer supports, and you want to move it to an operating system that does have support." Good luck!
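The tar-over fallback above can be sketched as two small helpers. This is a hypothetical sketch, not the author's exact commands: the function names are mine, it assumes splunkd is stopped on both hosts, and that the new host already has a fresh install of the same Splunk version.

```shell
#!/bin/sh
# archive_splunk: tar up an entire Splunk install (run on the OLD host, splunkd stopped).
# restore_splunk: unpack that archive over a fresh install (run on the NEW host, splunkd stopped).
archive_splunk() {  # $1 = SPLUNK_HOME on the old host, $2 = archive path
    tar -czf "$2" -C "$1" .
}
restore_splunk() {  # $1 = archive path, $2 = SPLUNK_HOME on the new host
    tar -xzf "$1" -C "$2"
}
# Example (paths are assumptions):
#   archive_splunk /opt/splunk /tmp/splunk-migrate.tgz
#   restore_splunk /tmp/splunk-migrate.tgz /opt/splunk
```

The -C flag keeps paths relative, so the archive unpacks cleanly regardless of where $SPLUNK_HOME lives on the new host.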
05-03-2024 04:07 PM
The actual steps are:
1. Find the corrupted bucket location with the dbinspect query.
2. Enable maintenance mode on the IDXCM.
3. Take the indexer offline (where you want to repair the bucket).
4. Run the fsck repair command on the stopped indexer.
5. Start the indexer when finished.
6. Disable maintenance mode on the IDXCM.
7. Let the indexer cluster heal.
8. Repeat the steps for the next bucket.
Large buckets (~10 GB) take about 25 minutes to repair. Good luck
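The steps above map roughly to these CLI commands. This is a sketch: the bucket path and index name are placeholders, and the fsck flags are the ones quoted elsewhere in this thread; verify against your Splunk version before running.

```
# On the cluster manager (IDXCM):
splunk enable maintenance-mode

# On the indexer peer that holds the corrupt bucket (this stops splunkd):
splunk offline
splunk fsck repair --one-bucket --bucket-path=/<path-to-bucket> --index-name=<indexName>
splunk start

# Back on the cluster manager, once the peer rejoins:
splunk disable maintenance-mode
```

Maintenance mode prevents the cluster from launching fixup activity while the peer is down, which is why it brackets the repair.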
05-03-2024 04:03 PM
The other day, a few alerts surfaced showing I had six large windows data buckets stuck in "Fixup Tasks - In Progress". I ran this query:

```
| dbinspect index=windows corruptonly=true
| search bucketId IN (windows~nnnn~guid,...)
| fields bucketId, path, splunk_server, corruptReason, state
```

and found that all the primary db_<buckets> from the alerts were corrupt. You can also see it in the IDXCM bucket status. I tried a few fsck repair commands on the indexers where the primary buckets resided, but they failed with failReason=No bloomfilter. Then I tried:

```
./splunk fsck repair --one-bucket --bucket-path=/<path> --index-name=<indexName> --debug --v --backfill-never
```

After that it cleared, and splunkd.log showed "Successfully released lock for bucket with path...". I hope this information helps.
Labels: troubleshooting
04-02-2024 07:10 AM
@siemless This may be better discussed in the user Slack, but I could not find you there, so I will respond here. In some cases, I have boxes where the /opt/splunk directory is mounted on a separate drive with mount point /opt/splunk. In that case you can just swap the disk; I learned this method from AWS support, but it takes preplanning. In other cases, I have boxes that are jacked up, with either volume issues across multiple disks or just not set up to swap disks. In that case you can use the Splunk docs: https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/MigrateaSplunkinstance. I argued with Splunk about the documentation steps, but they claim the steps are correct, although I still find them confusing. FWIW, this is what I did:
1. Create a new host with the new OS (in my case, I rename/re-IP it to match the original afterward).
2. Install the same version of Splunk on the new host (I used a .tar), set up systemd, set the same admin password, then stop splunkd; maybe test a restart and a reboot to verify.
3. Stop splunkd on the old host, tar up /opt/splunk, copy the old .tar to the new box, untar it over the new install, then start splunkd.
That worked for me, and going forward all new hosts will be configured for the disk-swap process. Good luck
03-01-2024 10:17 AM
In some cases, I have Splunk installed on server builds where /opt/splunk is a mount point on a separate disk/LVM; that is, the /opt/splunk directory is on a different disk than the OS. Would anyone know if there are any problems with detaching the LVM from a RHEL 7 box and then attaching it to a RHEL 9 box? Thank you
Labels: configuration, installation, upgrade
03-01-2024 07:21 AM
OK, thank you for the clarity... I think someone should revise those steps, then; they are ambiguous.
03-01-2024 07:06 AM
Thank you for the reply.
RE: the "swap" method, yeah, I thought about that and also share your apprehension.
RE: "deploy new component", yeah, I agree with that method for IDXC peers and SHC members... But check this out, and please let me know what you think. Per the Splunk docs (docs.splunk.com/Documentation/Splunk/9.2.0/Installation/MigrateaSplunkinstance, "Migrate a Splunk Enterprise instance from one physical machine to another"), under "When to migrate": "Your Splunk Enterprise installation is on an operating system that either your organization or Splunk no longer supports, and you want to move it to an operating system that does have support." Under "How to migrate", the steps say:
- Stop Splunk Enterprise services on the host from which you want to migrate.
- Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
- Install Splunk Enterprise on the new host.
The way I read this is:
1. Stop Splunk on the old box.
2. Tar up /opt/splunk on the old box, e.g. tar -cjvf $(hostname)_splunk-og.tar.bz2 --exclude=./var/* --exclude=./$(hostname)*bz2 ./
3. Move and untar the .bz2 file on the new box in /opt, e.g. tar -xjvf <hostname>_splunk-og.tar.bz2 -C /opt/splunk
4. Install a clean copy (downloaded from Splunk) of the same version of Splunk on top of the old copy.
Apparently whoever documented this believes this is the way to go... What do you think?
RE: my --exclude=./var/*: that is for boxes that don't contain indexed data.
RE: my --exclude=./$(hostname)*bz2: that is because I am running the tar from the /opt/splunk directory.
Thank you
02-29-2024 12:17 PM
I have a distributed deployment at version 9.0.4.1. Everything is running on RHEL 7, and the system/server team does not want to do in-place upgrades to RHEL 9. I have been tasked with migrating each node to a new replacement server (which will be renamed/re-IP'd to match the existing one). From what I have read this is possible, but I have a few questions. Let's say I start with standalone nodes, like the SHC deployer, Monitoring Console, and License Manager. These are the general steps I have gathered:
1. Install Splunk (same version) on the new server.
2. Stop Splunk on the old server.
3. Copy the old configs to the new server. <<< Which configs? Is there a checklist documented somewhere?
4. Start the new Splunk server and verify.
I could go through each directory copying configs, but any advice to expedite this step is appreciated. Thank you
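As a rough sketch of step 3: on standalone nodes, most migratable configuration lives under $SPLUNK_HOME/etc (apps, system/local, users). A hypothetical helper, assuming the old install's files are reachable from the new host (the function name and example paths are mine, not from any Splunk doc):

```shell
#!/bin/sh
# Hypothetical helper: copy the configuration tree (not indexed data) from an
# old Splunk install onto a fresh install of the same version on the new host.
# Usage: copy_splunk_etc <old SPLUNK_HOME> <new SPLUNK_HOME>
copy_splunk_etc() {
    # -a preserves permissions/ownership; the trailing /. copies contents, not the dir itself
    cp -a "$1/etc/." "$2/etc/"
}
# Example (paths are assumptions): copy_splunk_etc /mnt/old-splunk /opt/splunk
```

Run it with splunkd stopped on both sides, then start the new instance and verify before cutting over DNS/IP.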
Labels: configuration, installation, Other, upgrade
11-29-2023 05:33 AM
The numbers are not exact: from DS Forwarder Management, 1275; dc(h) from metrics, 1287; and the total stats count from the final query, 1166. So it is not accurate. I will need to create a lookup of UFs. Thank you for your support.
11-28-2023 12:48 PM

```
index=_internal source=*license_usage.log earliest=-1d@d latest=now
    [ search index=_internal source=*metrics.log fwdType=uf earliest=-1d@d latest=now
      | rename hostname as h
      | fields h ]
| stats sum(b) as total_usage_bytes by h
| eval total_usage_gb = round(total_usage_bytes/1024/1024/1024, 2)
| fields - total_usage_bytes
| addcoltotals label="Total" labelfield="h" total_usage_gb
```

I think this is what I wanted, unless someone thinks it is inaccurate? Please advise. TY
11-28-2023 12:27 PM
Thank you for the reply. I also looked at this log, but it requires curating an exact list of the UFs, because I have some pollution, e.g. h = HFs, SC4S, etc. The license_usage log may be the best route if I can put together a lookup of just UFs.
11-28-2023 11:56 AM
Thank you for your reply. Do you have a method of querying to get an answer to my question? I am not finding the key logs containing UF data throughput or ingest information.
11-28-2023 11:11 AM
Hi, I am working on a query to determine the hourly (or daily) totals of all indexed data (in GB) coming from UFs. In our deployment, UFs send directly to the indexer cluster. The issue I am having with the following query is that the volume is not realistic, and I am probably misunderstanding the _internal metrics log. Perhaps kb is not the correct field to sum as data throughput?

```
index=_internal source=*metrics.log group=tcpin_connections fwdType=uf
| eval GB = kb/(1024*1024)
| stats sum(GB) as GB
```

Any advice appreciated. Thank you
Labels: data, indexer, universal forwarder
07-17-2023 09:11 AM
Yes, thanks; I accepted that over a year ago.
05-11-2023 01:06 PM
Hi, does anyone know how to fix the Proofpoint TAP Web UI for inputs and configuration? When I launched a clean install of the app on a clean install of 9.0.4.1, the inputs and configuration pages were blank. However, when I edit the URL to /search, it loads properly. Does anyone know how to fix this? Thank you
04-27-2023 10:17 AM
Hi, I have not received any response from Cisco directly on this topic, so I thought I would try here. I am cleaning up a messy syslog pipeline containing all sorts of devices, including Cisco. I want to throw everything Cisco into one index, but I am not sure Cisco syslog formats are the same across all IOS devices: switches, routers, etc. I would assume they are the same, or at least very compatible (syslog/CEF)... Can anyone confirm or speak to this question? Ideally I will move to using SC4S, but in the meantime I want to clean up the existing pipeline and use the available TAs to parse/format the data. Any advice appreciated. Thank you
Labels: configuration, Other
04-10-2023 06:56 AM
Hi, I am forced to set an individual TZ for individual hosts in a serverclass because the hosts' OS time is not standardized. I have noticed that TZ = US/Eastern, TZ = US/Central, and TZ = US/Pacific all account for Daylight Saving Time automatically. However, I have servers in the following time zones, and I am hoping someone can confirm which TZ settings I should use to automatically adjust for DST:
- AUS/Eastern <<< using TZ = Australia/Sydney
- AWST <<< using TZ = Australia/West
- Etc/GMT+12 <<< cannot find an alternate
- GB (for UK BST) <<< using TZ = GB (for UK locations with BST)
- HKT <<< cannot find an alternate
Hopefully that is correct... I was given these by the host admin. Please refer me to a doc; I don't find these TZs in the Splunk docs, other than a reference to Wikipedia. Thank you
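For reference, per-host TZ overrides go in props.conf on the parsing tier (indexers or heavy forwarders), keyed by host stanza, and the values should be IANA zoneinfo names so DST is handled automatically. A sketch; the host patterns here are hypothetical:

```
# props.conf on the parsing tier (hypothetical host patterns)
[host::syd-*]
TZ = Australia/Sydney

[host::per-*]
TZ = Australia/Perth
```

Any name present in the OS zoneinfo database (e.g. under /usr/share/zoneinfo) is valid as a TZ value, which is why the Splunk docs defer to the external tz database list rather than enumerating zones themselves.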
Tags: tz
Labels: props.conf, time
03-08-2023 09:28 AM
Plan of action: 1) disable the KVstore on the indexer instances only; 2) upgrade the KVstore on all other Splunk instances with a "ready", enabled KVstore, to be safe. (In my case, all instances have it enabled by default.) Hopefully everything goes well! I will post if it goes sideways.
03-08-2023 08:43 AM
Well, in line with your recommendations... I am being advised by internal Splunk personnel to upgrade everything except the indexers. My feeling is to disable the KVstore on the indexers, but not on the other nodes like the IDX cluster master, etc. Would you agree? (You can conclude that I value your opinion with equal weight as support 😉, maybe more.)
03-08-2023 06:35 AM
Thanks for the clarification (and for persevering through my questions). I really do appreciate it! My concerns, re: the IDX cluster and KVstore, are due to all the nonsensical configurations the previous admins left for me... It would not surprise me to see something unusual here, as I have seen some really goofy stuff. For instance, I am concerned they did some sort of remote KVstore replication/sync to the indexer peers, etc. Maybe that doesn't matter; I'm not 100% sure about that. Would you advise that I disable the KVstore on the indexers, for safety? Per the Splunk docs: https://docs.splunk.com/Documentation/Splunk/8.1.3/Admin/AboutKVstore
RE: "upgrade only Search Heads and Heavy Forwarders", I see collections (probably default) on my MC/LM instance and Deployment Server instance as well, so I am thinking I should upgrade those too. IDK. When I run this on my MC/LM instance:

```
| rest splunk_server="local" "/services/search/distributed/peers/"
| search disabled=0
| rename peerName AS splunk_server
| fields host splunk_server
| join type=left splunk_server
    [| rest splunk_server="*" "/services/kvstore/status"
     | table splunk_server current.status current.replicationStatus current.storageEngine ]
| sort - current.storageEngine
```

I see that every instance (including the indexers) has a default enabled KVstore in the "ready" state... So, knowing this, does that change your advice?
RE: support, IDK; I feel support should be able to answer these questions. I get that support is "break/fix", but do I really have to break it first to get help or a couple of definitive answers? I am sure support will kick it to our sales rep, who will say we need to engage PS for two week$ 🤣... Thank you!
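For the record, disabling the KVstore on an instance is a server.conf setting, applied per instance and followed by a restart. A minimal sketch, per the AboutKVstore doc linked above:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[kvstore]
disabled = true
```

Apply this only on the indexers in the plan above; the search heads and other nodes that actually use collections keep the default (enabled) KVstore.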