All Posts


Report run dates (across the top, each looking back 10 days including the run day): 14/01/2025, 15/01/2025, 16/01/2025, 17/01/2025, 18/01/2025, 19/01/2025, 20/01/2025, 21/01/2025, 22/01/2025

    Date         Accessed
    05/01/2025
    06/01/2025
    07/01/2025
    08/01/2025
    09/01/2025
    10/01/2025   x
    11/01/2025   x
    12/01/2025   x
    13/01/2025   x
    14/01/2025
    15/01/2025
    16/01/2025   x
    17/01/2025
    18/01/2025   x
    19/01/2025   x
    20/01/2025   x
    21/01/2025   x
    22/01/2025

Here is a simple table with dates and whether the user has accessed the account (marked with an 'x'). Across the top are the dates when the report is run, looking back 10 days including the day the report is run. What do you expect the count to be for each of those days? Do you expect a single count at the end for the whole period? What does that count represent and why? Please fill in all the detail.
If we follow the cycle of 10 as you said (N) and 4 as the number of days (M) (which is also the number of times, because the same person or department accessing an account on the same day is counted as 1 day): assuming that within the first 10 days from today the same person and department accessed an account, with the number of visits per day counted as 1, then the final value is greater than 4. To achieve this over a larger time interval, I will append the results of each 10-day period separately.
TLDR; check your file permissions.   A bit late to the party, but this one had me swearing for a few hours. I work for an MSP and manage several separate Splunk Enterprise environments. After the latest upgrade to 9.3.2, I noticed that for a few of them the DS was "broken". Checked this post and others and went over configs side-by-side to see if there were any differences in outputs.conf, distsearch.conf, indexes.conf etc. There shouldn't really be many of those, since almost all config is done through Puppet, and there are not many reasons to change settings on an individual server basis. All types of internal logs are forwarded to the indexer cluster. Always.  Turns out that the upgrade process (ours or Splunk's) had left /opt/splunk/var/log/client_events owned by root, with 700 permissions. No wonder the files weren't even written to begin with... I suspect that on the environments that did work, I had out of habit run a chown -R splunk:splunk to ensure that I hadn't messed something up somewhere. Lesson: check the obvious stuff first.
Why is "Start time: 7, end time: 16, name, department, account, number of occurrences: 5" correct when you don't know on the 16th that this is the start of another set of 4 consecutive accesses? What would you get if the account was accessed on the 10th, 11th, 12th, 13th, 16th, 18th, 19th, 20th and 21st?
If the same person and department access the same account for 2 consecutive 4-day periods, they will get:

Start time: 5, end time: 14, name, department, account, number of occurrences: 4
Start time: 6, end time: 15, name, department, account, number of occurrences: 4
Start time: 7, end time: 16, name, department, account, number of occurrences: 5
Start time: 8, end time: 17, name, department, account, number of occurrences: 6
Start time: 9, end time: 18, name, department, account, number of occurrences: 7
Start time: 10, end time: 19, name, department, account, number of occurrences: 8
Start time: 11, end time: 20, name, department, account, number of occurrences: 7
Start time: 12, end time: 21, name, department, account, number of occurrences: 6
Start time: 13, end time: 22, name, department, account, number of occurrences: 5
Start time: 14, end time: 23, name, department, account, number of occurrences: 4
Start time: 15, end time: 24, name, department, account, number of occurrences: 4
Start time: 16, end time: 25, name, department, account, number of occurrences: 4

Because our data is collected today and yesterday, according to what you said, 10 is the cycle (N) and 4 is the number of days (M) (also the number of times, because the same person or department accessing one account on the same day is counted as 1 day).
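The per-window counts being debated here can be reproduced with a small sliding-window sketch. A minimal Python illustration, assuming days are plain day-of-month integers, a window length N = 10, and access on days 10-13 and 16-19 (the two 4-day periods under discussion):

```python
def window_counts(access_days, starts, n=10):
    """For each window [start, start + n - 1], count the distinct days
    on which the account was accessed (at most 1 count per day)."""
    days = set(access_days)  # same day accessed twice still counts once
    return {
        (start, start + n - 1): sum(1 for d in days if start <= d <= start + n - 1)
        for start in starts
    }

# Two 4-day access periods: 10-13 and 16-19
counts = window_counts([10, 11, 12, 13, 16, 17, 18, 19], starts=range(5, 17))
# The window (10, 19) contains all 8 access days, while (5, 14)
# contains only 10, 11, 12 and 13, hence a count of 4.
```

This makes it easy to see why several overlapping 10-day windows report a count above M = 4 even though each access streak is only 4 days long.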
One more comment which is mandatory to know: you cannot manage the DS itself with DS functionality! Don't even try it!!! For that reason it's good to use a dedicated DS server if/when you have several clients to manage. The DS can be a physical or virtual server; it doesn't matter which, as long as there are enough resources for it. Currently you can even make a pool of DSs working as one. If you are a Splunk Cloud customer you can order a dedicated DS license from Support by creating a service ticket. I have never tried whether I can do this as a Splunk Enterprise customer too. After 9.2 there are some new configuration options that you must set on the DS, especially if/when you are forwarding its logs to centralized indexers.
OK so what would you get if there were two periods of 4 consecutive days (10, 11, 12 and 13, and 16, 17, 18 and 19)?
Yes, what I want to achieve is to count the alarm results for every first 7 days plus 1 day
OK so the number of "visits" is because the 10 day periods 6-15, 7-16, 8-17, 9-18 and 10-19 all contain the same period of 4 consecutive visits (10, 11, 12 and 13)?
Because your condition is that M is 4, I will sound an alarm when the user accesses the same account more than 4 times in a row
Ahhh sorry @harishsplunk7 - I misread! I've rearranged the query a bit now, how does this look?

| rest splunk_server=local "/servicesNS/-/-/data/ui/views"
| rename title as dashboard, eai:acl.app as app
| fields dashboard app
| eval isDashboard=1
| append
    [ search index=_internal sourcetype=splunkd_ui_access earliest=-90d@d uri="*/data/ui/views/*"
    | rex field=uri "/servicesNS/(?<user>[^/]+)/(?<app>[^/]+)/data/ui/views/(?<dashboard>[^\.?/\s]+)"
    | search NOT dashboard IN ("search", "home", "alert", "lookup_edit", "@go", "data_lab", "dataset", "datasets", "alerts", "dashboards", "reports")
    | stats count as accessed by app, dashboard ]
| stats sum(accessed) as accessed, values(isDashboard) as isDashboard by app, dashboard
| fillnull accessed value=0
| search isDashboard=1 accessed=0

Please let me know how you get on, and consider accepting this answer or adding karma if it has helped. Regards Will
Please explain why the number of visits is 5 when the user has only accessed the account for the first 4 days (presumably 10th, 11th, 12th and 13th)?
hmm, Is your ES rule looking at All Time? If so, does it need to? This could chew up quite a bit of resource.
This one actually fixed the issue. I'd been working on this for over a day without a solution.
Hi Peers. How is the compatibility between the OTel Collector and AppDynamics? Is it efficient and recommendable? If we use the OTel Collector for exporting data to AppD, is it still required to also use AppD agents? How will licensing work if we use OTel for exporting data to AppD? Is the OTel Collector compatible with both the on-premise and SaaS environments of AppD? Thanks
Hi @Praz_123 , if you're speaking of the ulimit of Splunk servers, you can use the Monitoring Console Health Check. If you're speaking of forwarders (Universal or Heavy, it's the same), there's no direct solution and you should use the solution from @livehybrid: a shell script input (to insert in a custom add-on) that extracts this value and sends it to the indexers. Ciao. Giuseppe
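The scripted-input idea above can be illustrated with a short sketch: a script that emits the process's open-files limit in key=value form, ready for Splunk to index. A minimal Python version, assuming a Unix host; the field names are illustrative, not part of any add-on:

```python
import resource

def ulimit_event():
    """Emit the open-files ulimit of the current process as a
    key=value line that Splunk can index as a single event."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return f"ulimit_nofile_soft={soft} ulimit_nofile_hard={hard}"

if __name__ == "__main__":
    print(ulimit_event())
```

Packaged as a scripted input inside a custom add-on and deployed to the forwarders, the output lands in an index where a simple search can report the limit per host.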
Hi @raleighj , I suppose that you're using Enterprise Security; if yes, see the Security Posture dashboard to get this information. Ciao. Giuseppe
Hi @kiran_panchavat    Thanks for your response. Which server contains the `passwords.conf` file for Qualys TA (TA-QualysCloudPlatform)? I couldn't find it on the Heavy Forwarder (HF).
I'm having some issues populating the Traffic Center dashboard in Splunk ES. It's showing "Cannot read properties of undefined (reading 'map')". Does anyone have any solutions?
Hi @anglewwb35 , adding only a few pieces of information to those from @livehybrid and @kiran_panchavat: The limit for a non-dedicated Deployment Server is 50 clients: if it has to manage more than 50 clients, it must be dedicated. In addition, even with fewer than 50 clients, the load on the HF is relevant, because the DS role is a heavy job for the machine and you could compromise the parsing activities done by the HF. Then, on the Deployment Server you need a license, so you should connect it to your License Manager (not using a dedicated or a Free license); anyway, it doesn't consume license because it doesn't index anything locally; in fact, it's a best practice to forward all internal logs of all machines of the Splunk infrastructure to the indexers. On the Heavy Forwarder, you can use a Forwarder license (not a Free license!), but only if you perform normal forwarding; if you need e.g. to use DB Connect, you need a license, so you have to connect the HFs to the License Manager as well. Ciao. Giuseppe
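For anyone setting up a dedicated DS along these lines, the mapping from clients to deployed apps lives in serverclass.conf on the Deployment Server. A minimal, hypothetical sketch (the server class, hostname pattern and app name are all illustrative):

```
# serverclass.conf on the Deployment Server (names are illustrative)
[serverClass:all_forwarders]
whitelist.0 = uf-*.example.com

[serverClass:all_forwarders:app:outputs_to_indexers]
restartSplunkd = true
```

Apps placed under $SPLUNK_HOME/etc/deployment-apps on the DS are then distributed to the clients matching the whitelist.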