Activity Feed
- Posted Re: Routing and Forwarding from HWF to two indexers with different filtering on Getting Data In. 11-27-2024 06:44 AM
- Posted Routing and Forwarding from HWF to two indexers with different filtering on Getting Data In. 11-27-2024 02:39 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 03:10 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 03:07 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 02:51 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-30-2024 01:26 AM
- Karma Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) for isoutamo. 09-30-2024 01:26 AM
- Posted Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-25-2024 04:36 AM
- Karma Re: Migration from Standalone to indexer cluster (2 indexers + 1 SH) for PickleRick. 09-25-2024 04:36 AM
- Posted Migration from Standalone to indexer cluster (2 indexers + 1 SH) on Deployment Architecture. 09-25-2024 01:40 AM
- Posted Re: Ingestion issue from syslog-ng on All Apps and Add-ons. 06-02-2024 03:56 AM
- Posted Ingestion issue from syslog-ng on All Apps and Add-ons. 05-29-2024 12:00 PM
- Posted Re: How to add multiple field in a single search on Splunk Search. 05-23-2024 05:20 AM
- Got Karma for How to disable TLS1.0 and TLS1.1 for webhook port?. 04-02-2024 11:11 AM
- Posted Re: Running splunk on Rocky Linux distro on Installation. 03-06-2024 07:21 AM
- Posted Re: Moment.js not available in splunk 9.1.x+ on Splunk Enterprise. 02-23-2024 08:52 AM
- Posted Re: Microsoft Teams Add-on for Splunk: handling of 404 error on All Apps and Add-ons. 01-12-2024 03:09 AM
- Posted Re: Moment.js not available in splunk 9.1.x+ on Splunk Enterprise. 12-14-2023 06:20 AM
- Posted Re: Why is moment.js not available in splunk 9.1.x+? on Splunk Enterprise. 11-06-2023 09:46 AM
- Posted Re: Why is moment.js not available in splunk 9.1.x+? on Splunk Enterprise. 11-06-2023 08:22 AM
03-20-2023
06:21 AM
Hello Splunkers,
I would like to set an alert if a sudden high number of events is received.
I have this base search:
index=_internal source="*metrics.log" eps "group=per_source_thruput" NOT filetracker | eval events=eps*kb/kbps | timechart fixedrange=t span=1m limit=5 sum(events) by series
So I have the number of events per source per minute. I'd like to trigger an alert if there are more than X events in 5 consecutive minutes from one source.
Thanks for your hints in advance
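One way to express "more than X events in 5 consecutive minutes per source" is to compute per-minute totals and then use streamstats over a 5-minute window. A rough sketch building on the base search above, with a placeholder threshold of 1000 events/minute in place of X:

```
index=_internal source="*metrics.log" "group=per_source_thruput" NOT filetracker
| eval events=eps*kb/kbps
| bin _time span=1m
| stats sum(events) as events by _time, series
| streamstats window=5 sum(eval(if(events > 1000, 1, 0))) as over_threshold by series
| where over_threshold >= 5
```

Scheduled with a lookback of at least 5 minutes, this returns a row only when a source exceeded the threshold in 5 consecutive 1-minute buckets, so the alert can simply fire on "number of results > 0".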
03-14-2023
03:21 AM
Yes, it seems that solves the issue. One note: you should remove the .orig manifest file before restarting, or it will be applied. Thanks!
03-08-2023
01:02 PM
index="windows_events" Eventtype=4697 OR Eventtype=4698 | stats count by Hostname
02-13-2023
03:19 AM
Hello,
The Splunk Add-on Builder won't load; it shows the header, but the rest of the page is blank. (Splunk Enterprise 9.0.3 + app 4.1.1)
- Reinstalling does not help
- Nothing in splunkd.log or the Add-on Builder logs
- I found this event in web_service.log, but I don't know what I should do with it:
File "/opt/splunk/etc/apps/splunk_app_addon-builder/bin/splunk_app_add_on_builder/solnlib/utils.py", line 169, in extract_http_scheme_host_port raise ValueError(http_url + " is not in http(s)://hostname:port format") ValueError: splunk."mydomain"."ext" is not in http(s)://hostname:port format
Note: I replaced the real hostname.
Everything else is OK; all other apps work fine. I reach the server at https://splunk."mydomain"."ext":8000 . The splunk."mydomain"."ext" format is set in server.conf as serverName and in web.conf for mgmtHostPort (with :"port").
Any ideas?
Thanks in advance,
- Labels: configuration, troubleshooting
12-13-2022
10:29 AM
1 Karma
Hello,
I'm using the MS Teams add-on to collect call records. (https://splunkbase.splunk.com/app/4994)
The webhook port opened for the collection offers TLS 1.0 and TLS 1.1. How can I disable them?
The server.conf and web.conf already contain the
sslVersions=tls1.2
setting, and no other ports offer the deprecated protocols...
- Tags: tls
- Labels: SSL
12-06-2022
02:05 AM
1 Karma
Figured it out! Thanks for the tip!
index=logons action=failure
| stats dc(action) as failures by username
| where failures > 20
| map maxsearches=50 search=" search index=users user=\"$username$\"
    | appendpipe [ stats count | eval user=\"$username$\" | where count=0 | fields - count ]
    | spath memberOf{}.displayName output=groupName
    | eval username=\"$username$\", failures=\"$failures$\"
    | lookup support.csv group as groupName output support"
| eval support = if(isnull(support) OR support="", "central@example.com", support)
| table username, failures, support
12-06-2022
12:58 AM
Maybe I was misunderstood. The groupName part is fine (there are many values). The problem is when the username does not exist, i.e. the index=users search returns no results. In that case, how can I tell Splunk: "Okay, forget the map part, take the base search result and add central@example.com as the value of the support field"?
12-05-2022
07:12 AM
Hi Splunkers,
I use many alerts whose results contain a username. A map search then looks up this user in the user-list index, checks the group memberships, and sends the alert to the corresponding IT department (there are many countries, and a lookup gives the support email for the user's country group). If the user is not a member of any country group, the support email is set to the central one.
That works fine... as long as the user exists in the users index. If the user cannot be found there, the whole search stops working.
Example:
index=logons action=failure
| stats dc(action) as failures by username
| where failures > 20
| map maxsearches=50 search=" search index=users user=\"$username$\"
    | spath memberOf{}.displayName output=groupName
    | eval username=\"$username$\", failures=\"$failures$\"
    | lookup support.csv group as groupName output support"
| eval support = if(isnull(support) OR support="", "central@example.com", support)
| table username, failures, support
So if a user fails to log in more than 20 times, the alert triggers and sends an email to support, assigned by group membership; if there is no membership, it goes to central IT.
If the user cannot be found in index=users for some reason, the alert does not trigger at all. I would like the alert to trigger and send to central@example.com (since a non-existing user has no groups...) with the username from the base search included.
11-12-2022
06:13 AM
Hi, your process is correct, but that's not what this topic is about. You describe how to use a custom/third-party SSL certificate for the web GUI, but cliVerifyServerName is a different matter.
09-30-2022
02:34 AM
Hello, I have a JSON source where one of the field names contains an escape character. I actually cannot see it in highlighted mode or in the extracted field list - there it appears as "Report Refresh Date" - only in raw view: "\ufeffReport Refresh Date". Naturally, searches on what appears to be "Report Refresh Date" do not work. If I select the field, it appears as "'red dot' Report Refresh Date"; the search then works but cannot be saved - it "forgets" the red dot, as does any copy/paste attempt out of the search bar. How can I get rid of this? Would adding FIELDALIAS-alias1 = "\ufeffReport Refresh Date" AS "Report_Refresh_Date" to props.conf help?
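The character in question is a UTF-8 byte-order mark (U+FEFF) that survived into the field name. Two hedged options, assuming the sourcetype name below is a placeholder: a search-time FIELDALIAS along the lines suggested above, or stripping the sequence from _raw at index time with SEDCMD (which only affects newly indexed events, not data already on disk):

```
# props.conf - "my_json_sourcetype" is a placeholder name
[my_json_sourcetype]

# Search-time alias: the \ufeff below must match however the character
# is actually stored in the extracted field name (it may need to be the
# literal BOM character rather than the escape sequence).
FIELDALIAS-bom_alias = "\ufeffReport Refresh Date" AS Report_Refresh_Date

# Index-time alternative: strip a literal "\ufeff" escape sequence from _raw.
SEDCMD-strip_bom = s/\\ufeff//g
```

Which variant applies depends on whether the raw event contains the escape sequence as text or the actual BOM byte sequence; checking the raw bytes first would settle it.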
- Labels: JSON, props.conf
09-07-2022
02:15 AM
Hello, I have the same issue, and the remediation you shared is correct. But... in my case the app runs flawlessly for about 2 days, then the webhook fails > the subscription fails > no call records anymore. Maybe somebody has found a way to make this app stable? I played with the intervals, but it didn't help. I will try to disable and re-enable the webhook and subscription inputs via a cron job from the CLI, but that is so "homemade"... The app is installed on an HWF, and it runs when it runs... I have no idea why it fails randomly... The bad part is that while it stops collecting, the call records are lost - there is no way to fetch the "historical" logs... I'd appreciate your advice...
08-30-2022
04:47 AM
Hi, the same thing happened to me. Did you find a solution? Deleting and re-entering the subscription input solves it, but that is not a long-term solution. If the call-record feed stops, the events are lost in space - there is no way to fetch "historical" events...
07-19-2022
07:30 AM
Hi, no offense, but the first rule of Splunk is that the /opt/splunk/etc/system/README/ and /opt/splunk/etc/system/default folders and their content should not be modified. This fix should come from Splunk support in a new release. I understand the do-it-yourself way is faster, but you may see unexpected behavior in the future.
07-13-2022
01:08 AM
1 Karma
I added cliVerifyServerName = true to the [sslConfig] stanza, and the result is an endless flood of "ERROR: certificate validation: self signed certificate in certificate chain" - and the server won't start at all. I'm using a wildcard SSL certificate issued by a CA, so there should be no self-signed chain - or maybe it is checking against the default certificate. Conclusion: I will wait for the fix from Splunk's side (as with the federated.conf issue or the Python Upgrade Readiness App issue).
06-24-2022
08:57 AM
2 Karma
Hi, correct me if I'm wrong, but the "mode" and "needs_consent" value definitions are missing from .../system/README/federated.conf.example and federated.conf.spec. I think that is causing the issue.
06-24-2022
03:34 AM
Hi, I got the exact same error after upgrading to 8.2.6.
splunk btool check --debug
... Checking: /opt/splunk/etc/system/default/federated.conf
Invalid key in stanza [provider:splunk] in /opt/splunk/etc/system/default/federated.conf, line 20: mode (value: standard).
Invalid key in stanza [general] in /opt/splunk/etc/system/default/federated.conf, line 23: needs_consent (value: true).
...
splunk btool federated list --debug
/opt/splunk/etc/system/default/federated.conf [default]
/opt/splunk/etc/system/default/federated.conf [general]
/opt/splunk/etc/system/default/federated.conf needs_consent = true
/opt/splunk/etc/system/default/federated.conf [provider:splunk]
/opt/splunk/etc/system/default/federated.conf appContext = search
/opt/splunk/etc/system/default/federated.conf mode = standard
/opt/splunk/etc/system/default/federated.conf type = splunk
/opt/splunk/etc/system/default/federated.conf useFSHKnowledgeObjects = false
Any ideas?
05-11-2022
12:32 AM
Hello, thanks for the tip, but I don't think it helps in this case. 1. All my instances have the same issue after the update, so I have no "working" manifest file to copy over... 2. In the current file the python_upgrade_readiness_app has 1000 rows; I'm not going to edit the hashes 🙂 3. I can delete the app and restore the original 1.0 - no more integrity issue - but then Splunk wants to update the app again, and the circle restarts... I have the feeling this is something for Splunk Support. It seems the update "forgot" to record the new version in the manifest file. As a homemade solution, I could remove the app's records from the manifest entirely - nothing to check, no integrity warning - but I don't like that approach.
05-04-2022
08:30 AM
2 Karma
Hello,
I recently upgraded my Splunk Enterprise (and Heavy Forwarder) instances to 8.2.5 and 8.2.6. Both versions (maybe others too) install the Python Upgrade Readiness App 1.0 by default. Splunk then asked to update the app to 3.1. Nicely done, but after the restart the integrity check starts complaining about the missing files of the 1.0 version, which is annoying. Is there a way to "teach" Splunk the new version? (I know the check can be turned off completely, but I don't want to lose the information if something important ever changes.)
11-20-2020
03:03 AM
Hello, I have a new index - it's a monster - eating up my disk space. Until I move it to the physical server, I need to fix it. Well, I limited maxTotalDataSizeMB; that seems to work, but cold storage is skipped and buckets land in frozen directly, so I cannot search them. The hot/warm storage is "local" on the VM; cold, frozen, and thawed are on S3. The goal is 7 days in hot/warm (faster if over maxTotalDataSizeMB), then cold for 90 days (no size limit), then thawed for 1 year (no size limit). Here are my current settings (default-file values prefixed with their path):
archiver.enableDataArchive = 0
/opt/splunk/etc/system/default/indexes.conf archiver.maxDataArchiveRetentionPeriod = 0
/opt/splunk/etc/system/default/indexes.conf assureUTF8 = false
bucketRebuildMemoryHint = 0
coldPath = /mnt/archive_s3/SPLUNK_DB/indexname/colddb
/opt/splunk/etc/system/default/indexes.conf coldPath.maxDataSizeMB = 0
coldToFrozenDir = /mnt/archive_s3/SPLUNK_DB/indexname/Frozenarchive
/opt/splunk/etc/system/default/indexes.conf coldToFrozenScript =
compressRawdata = 1
/opt/splunk/etc/system/default/indexes.conf datatype = event
/opt/splunk/etc/system/default/indexes.conf defaultDatabase = main
enableDataIntegrityControl = 0
enableOnlineBucketRepair = 1
/opt/splunk/etc/system/default/indexes.conf enableRealtimeSearch = true
enableTsidxReduction = 0
frozenTimePeriodInSecs = 3024000
homePath = $SPLUNK_DB/indexname/db
/opt/splunk/etc/system/default/indexes.conf homePath.maxDataSizeMB = 0
/opt/splunk/etc/system/default/indexes.conf hotBucketTimeRefreshInterval = 10
/opt/splunk/etc/system/default/indexes.conf indexThreads = auto
/opt/splunk/etc/system/default/indexes.conf journalCompression = gzip
/opt/splunk/etc/system/default/indexes.conf maxBloomBackfillBucketAge = 30d
/opt/splunk/etc/system/default/indexes.conf maxBucketSizeCacheEntries = 0
maxConcurrentOptimizes = 6
maxDataSize = auto_high_volume
maxGlobalDataSizeMB = 0
maxHotBuckets = 10
maxHotIdleSecs = 86400
/opt/splunk/etc/system/default/indexes.conf maxHotSpanSecs = 7776000
maxMemMB = 20
/opt/splunk/etc/system/default/indexes.conf maxMetaEntries = 1000000
/opt/splunk/etc/system/default/indexes.conf maxRunningProcessGroups = 8
/opt/splunk/etc/system/default/indexes.conf maxRunningProcessGroupsLowPriority = 1
/opt/splunk/etc/system/default/indexes.conf maxTimeUnreplicatedNoAcks = 300
/opt/splunk/etc/system/default/indexes.conf maxTimeUnreplicatedWithAcks = 60
maxTotalDataSizeMB = 76800
maxWarmDBCount = 200
/opt/splunk/etc/system/default/indexes.conf memPoolMB = auto
minHotIdleSecsBeforeForceRoll = 0
/opt/splunk/etc/system/default/indexes.conf minRawFileSyncSecs = disable
/opt/splunk/etc/system/default/indexes.conf minStreamGroupQueueSize = 2000
/opt/splunk/etc/system/default/indexes.conf partialServiceMetaPeriod = 0
/opt/splunk/etc/system/default/indexes.conf processTrackerServiceInterval = 1
/opt/splunk/etc/system/default/indexes.conf quarantineFutureSecs = 2592000
/opt/splunk/etc/system/default/indexes.conf quarantinePastSecs = 77760000
/opt/splunk/etc/system/default/indexes.conf rawChunkSizeBytes = 131072
/opt/splunk/etc/system/default/indexes.conf repFactor = 0
rotatePeriodInSecs = 60
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
/opt/splunk/etc/system/default/indexes.conf serviceInactiveIndexesPeriod = 60
/opt/splunk/etc/system/default/indexes.conf serviceMetaPeriod = 25
/opt/splunk/etc/system/default/indexes.conf serviceOnlyAsNeeded = true
/opt/splunk/etc/system/default/indexes.conf serviceSubtaskTimingPeriod = 30
/opt/splunk/etc/system/default/indexes.conf splitByIndexKeys =
/opt/splunk/etc/system/default/indexes.conf streamingTargetTsidxSyncPeriodMsec = 5000
/opt/splunk/etc/system/default/indexes.conf suppressBannerList =
suspendHotRollByDeleteQuery = 0
/opt/splunk/etc/system/default/indexes.conf sync = 0
syncMeta = 1
thawedPath = /mnt/archive_s3/SPLUNK_DB/indexname/thaweddb
/opt/splunk/etc/system/default/indexes.conf throttleCheckPeriod = 15
/opt/splunk/etc/system/default/indexes.conf timePeriodInSecBeforeTsidxReduction = 604800
/opt/splunk/etc/system/default/indexes.conf tsidxReductionCheckPeriodInSec = 600
tsidxWritingLevel =
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
/opt/splunk/etc/system/default/indexes.conf warmToColdScript =
I assume the issue is coldPath.maxDataSizeMB = 0, which is why cold is skipped, but I'm not sure. I'd appreciate it if somebody could fix my settings.
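For comparison, a minimal per-index stanza along the lines of the stated goal (7 days hot/warm, ~90 more days cold, then frozen) might look like the sketch below. The numbers are illustrative; buckets freeze when they cross frozenTimePeriodInSecs or when maxTotalDataSizeMB is exceeded, and overrides belong in system/local or an app, never in system/default:

```
# $SPLUNK_HOME/etc/system/local/indexes.conf - illustrative values only
[indexname]
homePath   = $SPLUNK_DB/indexname/db
coldPath   = /mnt/archive_s3/SPLUNK_DB/indexname/colddb
thawedPath = /mnt/archive_s3/SPLUNK_DB/indexname/thaweddb
coldToFrozenDir = /mnt/archive_s3/SPLUNK_DB/indexname/Frozenarchive

# Cap hot/warm so buckets roll to cold instead of filling the local VM disk.
homePath.maxDataSizeMB = 76800

# Keep buckets searchable (hot + warm + cold) for ~97 days (7 + 90)
# before they are frozen: 97 * 86400 = 8380800.
frozenTimePeriodInSecs = 8380800
```

Note that the pasted output shows frozenTimePeriodInSecs = 3024000 (35 days), which on its own would freeze buckets well before the intended 90-day cold window.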
- Labels: index
11-06-2020
04:14 AM
Meanwhile, I found it 🙂 The Palo Alto add-on's permissions were limited to the app, not Global. So searching within the Palo Alto app is fine; the strange behavior only shows in the default Search app. Only the "bonus" question is left: what will happen if I get the same source type but from a different time zone? Should I clone the original pan:log source type with a different time zone setting and add this new source type to props/transforms.conf?
11-06-2020
03:09 AM
Dear Splunkers, sorry about this, but I have never done such a thing before... My Splunk is in the EU, and I have now added Palo Alto firewall logs (collected by a syslog server and pushed to Splunk by a UF) from Australia. The timestamping is wrong. First of all, today's events (11/06) are indexed as the 11th of June (06/11). On top of that, they are indexed two hours ahead of the current time. The events now look like this: 11/06/2020 13:45:43.000 06-11-2020 21:45:43 User.Info 10.180.160.41 Nov 6 21:45:43 Firewall.device.name 1, ... I'm using the Palo Alto add-on defaults for the source type, with only the time zone changed to Sydney. (Timestamp prefix: ^(?:[^,]*,){5} ; Lookahead: 100.) Could you please advise what I should do? (And what will happen if I get logs of the same source type, into the same index, but from a different time zone?) Regards, Norbert
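For the time-zone part of the question, a common pattern is one sourcetype (or host-based override) per zone, each with its own TZ setting applied on the parsing tier. A hedged props.conf sketch - the cloned sourcetype name is hypothetical:

```
# props.conf on the indexer / heavy forwarder that parses the data
[pan:log]
TZ = Australia/Sydney

# Hypothetical clone for a feed from another zone; assign it to that
# input in inputs.conf (or via a host-based props stanza).
[pan:log_eu]
TZ = Europe/Berlin
```

TZ only affects events indexed after the change, so already-indexed events keep their wrong timestamps.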
- Labels: props.conf, sourcetype
10-02-2020
02:48 AM
Thank you! #imsonoob 🙂
10-02-2020
02:33 AM
Hi, I have a dashboard for user activity, and I would like to add usernames via a dropdown input. <fieldset submitButton="true" autoRun="false">
<input type="dropdown" token="user">
<label>Username / Identity</label>
<search>
<query>`umbrella` | stats count by identities | sort + identities | fields - count</query>
<earliest>-24h@h</earliest>
<latest>now</latest>
</search>
<fieldForLabel>user</fieldForLabel>
<fieldForValue>user</fieldForValue>
</input> The search returns the user list. The rest of the panels use `umbrella` identities="$user$" ... searches (if I add a text input and type the username manually, everything works fine). But... the dropdown input still appears inactive - greyed out, and mouse-over shows a red crossed-out circle 😞 What did I miss? (The same thing works fine in another app, and naturally I'm an admin in this app, so I doubt it's permissions.)
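One thing worth checking: the populating search returns a field called identities, while fieldForLabel/fieldForValue point at a field called user, so the dropdown may have no values to offer. A sketch with the field names aligned (token kept as-is):

```xml
<input type="dropdown" token="user">
  <label>Username / Identity</label>
  <search>
    <query>`umbrella` | stats count by identities | sort + identities | fields - count</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>identities</fieldForLabel>
  <fieldForValue>identities</fieldForValue>
</input>
```

fieldForLabel and fieldForValue must both name a field that actually exists in the populating search's results; a mismatch is one common cause of a greyed-out dropdown.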
07-16-2020
01:46 AM
Thanks, looks like there is no easy way 🙂 I actually tried my theory and renamed the app before installing. It works, except for updates...
07-16-2020
01:27 AM
Thank you. ... In this case, anyone who has access to both indexes will see a "mixture" of all logs in the app, right? (That would not be good at all...) Could I install the same app twice if I modify the app's folder name and the app.conf in the tgz file beforehand? Do I need to change anything else?