All Posts



Hi gcusello! That worked just the way I wanted to. Thanks for the support. Steve
Hi @gcusello , Thank you, this is a start. Indeed, I do get the time, but only one value is displayed. I would like to keep the top 5 peaks per day over the last x days. Thanks!
@isoutamo This worked perfectly! Thank you for your input. Seems the `source` monitor stanza was the way to go. Here is my final configuration for future Splunkers who want to accomplish the same:

[source::.../var/log/splunk/splunkd*]
SEDCMD-url = s/https?:\/\/www.domain.com\/(.*)/https:\/\/www.domain.com\/XXXX-XXXX-XXXX/g
Hi @Splunked_Kid , you could try something like this:

index=myindex
| bin span=1m _time
| stats sum(MIPS) as MIPSParMinute by _time
| eval Day=strftime(_time,"%Y/%m/%d")
| eventstats max(MIPSParMinute) as MaxMIPSParMinute by Day
| where MIPSParMinute=MaxMIPSParMinute
| eval Hour=strftime(_time,"%H:%M")
| table Day Hour MaxMIPSParMinute

Ciao. Giuseppe
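For anyone who wants to sanity-check the keep-the-peak-row-per-day logic outside of Splunk, here is a minimal Python sketch using only the standard library; the sample data and values are hypothetical, and the idea mirrors the eventstats-by-day approach (keep, for each day, the whole row that holds that day's maximum, so the timestamp survives):

```python
from datetime import datetime

# Per-minute MIPS sums: (timestamp, value) -- hypothetical sample data
rows = [
    (datetime(2025, 1, 1, 9, 15), 120.0),
    (datetime(2025, 1, 1, 14, 2), 340.0),
    (datetime(2025, 1, 2, 8, 30), 210.0),
    (datetime(2025, 1, 2, 23, 59), 95.0),
]

# For each day, keep the (timestamp, value) pair with the maximum value,
# so the hour and minute of the peak are retained alongside it.
best = {}
for ts, mips in rows:
    day = ts.strftime("%Y/%m/%d")
    if day not in best or mips > best[day][1]:
        best[day] = (ts, mips)

for day, (ts, mips) in sorted(best.items()):
    print(day, ts.strftime("%H:%M"), mips)
```

The key point is that the daily maximum is selected as a full row rather than recomputed as a bare aggregate, which is exactly what the timechart-then-dedup variant loses.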
Hello, I'm trying to add up the MIPS of each of the partitions per minute and then keep only the maximum MIPS per day, but I'd also like to display the hour and minute at which this peak arrived. How do I do it? Here's my search. First, I add up the MIPS for all partitions per minute. Second, I keep only the max value per day of that sum.

index=myindex
| bin span=1m _time
| stats sum(MIPS) as MIPSParMinute by _time
| timechart span=1d max(MIPSParMinute) as MaxMIPSParMinute
| eval Day=strftime(_time,"%Y/%m/%d")
| eval Hour=strftime(_time,"%H:%M")
| sort 0 - MaxMIPSParMinute Day
| dedup Day
| table Day Hour MaxMIPSParMinute

Unfortunately, in my result I lose the hour and minute of when this peak occurs in the day. Is there a way of keeping the hour and minute value? Thanks!
We are implementing an app that collects a large CSV report via a Python script, but an interval expressed in seconds is not a good solution for us. Is a cron-style option for collection intervals expected in future versions? Please let me know. Thanks in advance.
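For what it's worth: depending on your Splunk version, a modular input's interval setting may already accept a cron expression instead of a number of seconds (check the inputs.conf spec for the version you are running). A hypothetical stanza, assuming a modular input named my_csv_collector:

```
[my_csv_collector://daily_report]
# Run at 02:00 every day (cron syntax) instead of every N seconds.
interval = 0 2 * * *
```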
Hi @kzjbry1 , first check whether all the features this dashboard uses in Classic Dashboards are also present in the Dashboard Studio version, because some features haven't been migrated yet; for that reason I still haven't moved to Dashboard Studio myself. Anyway, if you cloned the dashboard to Dashboard Studio, to use the new dashboard instead of the original you also have to modify the app menu in [Settings > User Interface > app menu] and update any drilldowns (if present). Ciao. Giuseppe
Can anyone tell me how to migrate a Microsoft Azure App for Splunk dashboard (security_center_alerts) from the original Classic format to Dashboard Studio? I realize I can clone the dashboard, but am not sure how to have the app recognize the migrated dashboard instead of the original one. Also, is there a way to add a new local dashboard to the app dropdown menus? Thanks in advance! Steve Cook
Hi, I'm struggling to get single values to show with a trendline comparing to the previous month.

| bin span=1mon _time
| chart sum(cost) as monthly_costs over bill_date

I've tried different variations. The above shows a single value for each month, but I want to add a trendline to the single value to compare against the previous month. Any ideas? Thanks!
One more addition: for testing we also wrote a config for the collector that exports traces to both Splunk and Jaeger at the same time, so we could see whether it was just the collector not registering span links in general, or something else down the line. When doing this, the span links appeared in Jaeger, but were still not visible in Splunk. So we think it's either the (default) configuration of the splunk-opentelemetry-collector (specifically the Splunk exporter) not handling span links, or something in Observability Cloud not accepting our span links. There is one more hint I got from the collector container logs; the following messages appear a few seconds after the traces are sent:

2025-01-27T14:35:00.707Z info transport/http2_server.go:662 [transport] [server-transport 0xc0022ba000] Closing: EOF {"grpc_log": true}
2025-01-27T14:35:00.707810569Z 2025-01-27T14:35:00.707Z info transport/controlbuf.go:577 [transport] [server-transport 0xc0022ba000] loopyWriter exiting with error: transport closed by client {"grpc_log": true}

This might indicate that the writer is stuck at sending the span links, since everything else is sent to Splunk correctly. However, I am not sure about the inner workings there. I would appreciate any hints for debugging this!
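One debugging aid here: span links travel in OTLP JSON as a links array on each span, so you can add a debug/file exporter to the collector pipeline and inspect whether that array actually survives to the export stage. A minimal stdlib-only Python sketch of the expected shape; all IDs and names below are hypothetical, and the field names follow the OTLP JSON encoding:

```python
import json

# Hypothetical OTLP-JSON-style span carrying one span link (hex-encoded IDs).
span = {
    "name": "process-batch",
    "traceId": "5b8efff798038103d269b633813fc60c",
    "spanId": "eee19b7ec3c1b174",
    "links": [
        {
            # Identifiers of the linked-to span.
            "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
            "spanId": "00f067aa0ba902b7",
            "attributes": [
                {"key": "link.kind", "value": {"stringValue": "follows-from"}}
            ],
        }
    ],
}

payload = json.dumps(span, indent=2)
print(payload)
```

If a debug exporter shows the links array populated like this but the trace view still shows nothing, that would point at the backend rather than the SDK or the collector's receivers.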
Hi @jkamdar , let me know if I can help you more, or, please, accept one answer for the benefit of the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Thanks, appreciate the help.
I don't see the span links in my trace view either. This is strange. I noticed in your example that you use port 4318 in the collector URL, and thought that maybe the problem is somehow related to the gRPC protocol. But when I configure "http/protobuf" with port 4318 in the local app, it doesn't change anything. The links just don't appear in the UI. Here is the Splunk OTel Collector configuration using docker compose:

otelcol:
  image: quay.io/signalfx/splunk-otel-collector:latest
  environment:
    - SPLUNK_ACCESS_TOKEN=XXXXXXXXXXXXXXXXX
    - SPLUNK_REALM=eu1
  ports:
    - "13133:13133"
    - "14250:14250"
    - "14268:14268"
    - "4317:4317"
    - "4318:4318"
    - "6060:6060"
    - "8888:8888"
    - "9080:9080"
    - "9411:9411"
    - "9943:9943"
  container_name: otelcol
  restart: unless-stopped
Hi @jkamdar , about the apps to migrate: I mean all the apps not contained in the base Splunk installation. If you install the same version of Splunk, you can copy the full $SPLUNK_HOME/etc/apps folder. Beware of the last point: if your apps contain any paths, you have to modify them manually to adapt them from Linux to Windows; e.g. Splunk internal logs must be moved from /opt/splunk/var/log/splunk to C:\Program Files\splunk\var\log\splunk. Ciao. Giuseppe
I also forgot to type this in my example search, but most of my queries for these alerts use latest=@h, keeping the window the same.
Makes sense: https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html

Security

NoNewPrivileges=
Takes a boolean argument. If true, ensures that the service process and all its children can never gain new privileges through execve() (e.g. via setuid or setgid bits, or filesystem capabilities). This is the simplest and most effective way to ensure that a process and its children can never elevate privileges again. Defaults to false. In case the service will be run in a new mount namespace anyway and SELinux is disabled, all file systems are mounted with MS_NOSUID flag. Also see No New Privileges Flag.
Hi! Thanks for the response. As you predicted, the time frame is not where I'm facing the issue with my search, so it must be something to do with latency like you said. Is there any way to change how the search is run? And by two alerts, do you mean running differently timed alerts, or separate queries?
Thanks @isoutamo 
The very ugly solution would be to search for the "initial" results, then do fillnull, and then search for particular values. But that would be hopelessly inefficient, because you'd need to dig through all events each time you run your search. If the search is meant to be run relatively often, you could consider summary indexing and transform your data so that it contains some default "non-present" entry.
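One common shape of that fillnull-style pattern, sketched in SPL with hypothetical names (expected_hosts.csv would be a lookup listing every value that should be present; here a zero count is the default "non-present" entry, and the final where isolates the missing ones):

```
index=myindex
| stats count by host
| append [| inputlookup expected_hosts.csv | eval count=0]
| stats sum(count) as count by host
| where count=0
```

With summary indexing, the first stats would instead be scheduled and collected into a summary index, so the expensive scan over raw events runs once per interval rather than on every search.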
Thanks @gcusello  Yes, it's a stand-alone server. My comments/questions in-line below:

start from the same Splunk version - Yes, good point, will do that

copy the apps from the old to the new one - Are you referring to apps like add-ons, Splunk_TA_nix and Splunk_TA_windows?

modify eventual monitor inputs using the new path - Do you mean update inputs.conf?