All Topics

I want to integrate M365 into a Linux-based Splunk instance. Could somebody walk me through how the architecture operates? How is the API call handled? What kind of network data can be seen (for example, a specific user accessing M365)? Is it possible to monitor the user's activities and the network (such as port details of the various services offered by M365)? I want to understand how Splunk works here. It would be great if someone could help. Thank you. Regards, Ash

Hi Gurus, greetings. Please advise whether the Splunk HEC endpoint can ingest protobuf messages: parse the protobuf message (using a configured schema) and convert it into a format that is compatible with the indexers and search heads. If there is already an app for this, please point me to it. If not, please share the app SDK documentation that would let me add custom logic before indexing, if applicable. Thanks in advance!

My Splunk query:

index=aaa sourcetype="bbb"
| bucket _time span=1d
| stats count by data_date, Name, RecordCount limit=0
| lookup abc.csv Names as Name
| eventstats sum(RecordCount) as Total by Name, _time
| eval p=round((Total/Threshold)*100,2)
| chart values(p) over Name BY data_date limit=0

data_date and _time represent the same day (value) in yyyy-mm-dd format. abc.csv stores the list of all Names and their associated threshold values. When I select "Last 7 days" in the time picker and use the setting below in the XML, the table displays as expected. Can someone please help me extend this coloring to all columns, considering that this date value (the column header value) changes whenever the user changes the value in the date picker?

<format type="color" field="25-03-2023">
  <colorPalette type="list">[#FF0000,#FFF000,#55C169,#55C169]</colorPalette>
  <scale type="threshold">90,95,100</scale>
</format>
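
One approach, sketched on the assumption that the same color rule should apply to every column of the table: Simple XML applies a <format> element to all columns when the field attribute is omitted, which removes the dependency on a hard-coded date header.

<format type="color">
  <colorPalette type="list">[#FF0000,#FFF000,#55C169,#55C169]</colorPalette>
  <scale type="threshold">90,95,100</scale>
</format>

Note that this would also apply the rule to the Name column, so it only fits if coloring that column is acceptable.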

This post is about moving configurations from one Splunk environment to another. I work on the support side of the corporation where I work. We rely on Splunk for reports, alerts, event types, data models, tags, etc. We are not responsible for the hands-on work of maintaining Splunk and rely on an entirely different business unit for that. They can be territorial, siloed, and difficult to deal with. We desperately want the hard work our support teams have put into Splunk to be carried over to the new environment, which is why I am here. I am asking for guidance on what is needed and where to find the configs that need to be migrated to the new environment.

The current environment consists of 3 clustered search heads and 2 indexer clusters of 2 indexers each (4 total). I have tried previously to guide others in the other business unit to where the configs are, without luck. My only reference is a Splunk Enterprise instance on my laptop; since I am not allowed to look at or touch these servers, I have to guide others on how to do it. Unfortunately, I have zero information about the target environment for the migration.

We also have an issue where the Splunk forwarder agent on a particular server becomes non-responsive from the volume of log events ingested. Any suggestions on how to fix this are appreciated. Regards.

Forgot to add that this concerns Splunk Enterprise version 7.2.3.
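
A rough sketch of where the knowledge objects listed above usually live on the search heads (paths assume a default $SPLUNK_HOME; <app> and <user> are placeholders for whatever apps and owners your teams actually use):

$SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf   # reports and alerts
$SPLUNK_HOME/etc/apps/<app>/local/eventtypes.conf      # event types
$SPLUNK_HOME/etc/apps/<app>/local/tags.conf            # tags
$SPLUNK_HOME/etc/apps/<app>/local/datamodels.conf      # data models
$SPLUNK_HOME/etc/apps/<app>/local/macros.conf          # search macros
$SPLUNK_HOME/etc/apps/<app>/lookups/                   # lookup CSV files
$SPLUNK_HOME/etc/users/<user>/<app>/local/             # private (user-owned) objects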

I am facing an issue in which Splunk logs multiple lines as a single event, even though the timestamps are different. I've attached sample logs below.

21:20:14,817 INFO [exec-68932] Interceptor.setParameters(Interceptor.java:223) - set setParameters to decode access token and id token
21:20:14,817 INFO [exec-68932] Interceptor.setParameters(Interceptor.java:253) - cached id
21:20:14,820 INFO [exec-68932] preferences(Controller.java:95) - Get Customer id
21:20:14,820 INFO [exec-68932] preferences(Controller.java:107) - User obtained
21:20:14,820 INFO [exec-68932] preferences(Controller.java:113) - Get flag
21:20:14,820 INFO [exec-68932] preferences(Controller.java:114) - method=GET:type=Start
21:20:14,820 INFO [exec-68932] getService(userService.java:269) - turnOn Variable
21:20:14,836 INFO [exec-68948] Interceptor.preHandle(Interceptor.java:71) - Entered Prehandle - Interceptor
21:20:14,836 INFO [exec-68948] Interceptor.getCode(Interceptor.java:183) - Constructed url
21:20:14,849 INFO [exec-68932] callPost(RestClientUtil.java:104) - callPost(): Excecution Time=28 ms
21:20:14,850 INFO [exec-68932] getPropertiesService(userService.java:217) - get uiflag
21:20:14,850 INFO [exec-68932] getPropertiesService(userService.java:220) - getService(): Excecution Time=29 ms
21:20:14,850 INFO [exec-68932] getService(userService.java:280) - success from service
21:20:14,850 INFO [exec-68932] getService(userService.java:427) - Ui flag:O
21:20:14,851 INFO [exec-68932] getService(userService.java:428) - FlagActivate:true
21:20:14,851 INFO [exec-68932] getService(userService.java:512) - true:false
21:20:14,851 INFO [exec-68932] getService(userService.java:548) - email address:
21:20:14,851 INFO [exec-68932] preferences(Controller.java:127) - method=GET:elapsed=31ms:type=End
21:20:14,851 INFO [exec-68932] preferences(Controller.java:135) - Redirect url
---
21:20:14,915 INFO [exec-68950] Interceptor.preHandle(Interceptor.java:71) - Entered - Interceptor
21:20:14,916 INFO [exec-68943] Interceptor.preHandle(Interceptor.java:71) - Entered - Interceptor
21:20:14,916 INFO [exec-68943] Interceptor.getCode(Interceptor.java:183) - Constructed url
21:20:15,123 INFO [exec-68948] Interceptor.setParameters(Interceptor.java:223) - set setParameters to decode access token and id token
21:20:15,124 INFO [exec-68948] Interceptor.setParameters(Interceptor.java:253) - cached id
21:20:15,125 INFO [exec-68948] preferences(Controller.java:95) - Get Customer id

Any help would be appreciated.

Thanks,
Neenu
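
A minimal props.conf sketch for breaking these events on the leading timestamp; the sourcetype name my_app_log is a placeholder, and the settings assume every event starts with HH:MM:SS,mmm:

[my_app_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{2}:\d{2}:\d{2},\d{3}
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 15
# The lines carry no date, so Splunk falls back to other sources (file context or current date) for the date portion.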

Hi all, I hope somebody can help. I'm looking to create a search based on the following in a Windows event log. I'm not even sure it's referred to as a compound search, and if that term is wrong in the Splunk world, what is the correct one? It seems my googling skills have failed me this time round.

EventID=5145 and RelativeTargetName={srvcsvc or lsarpc or samr}, with at least 3 occurrences having different RelativeTargetName values and the same (Source IP, Port), where SourceUserName is not like "*DC*$", within 1 minute.

Thanks in advance
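
A sketch of one way to express that correlation, assuming the field names EventCode, RelativeTargetName, Source_Address, Source_Port, and SourceUserName (adjust the index and field names to whatever your Windows TA actually extracts):

index=wineventlog EventCode=5145 (RelativeTargetName="srvcsvc" OR RelativeTargetName="lsarpc" OR RelativeTargetName="samr") NOT SourceUserName="*DC*$"
| bin _time span=1m
| stats dc(RelativeTargetName) as distinct_pipes values(RelativeTargetName) as pipes by _time, Source_Address, Source_Port
| where distinct_pipes >= 3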

I have many alerts and reports configured with a particular email ID (splunkdata@gmail.com). Now I want to change the email ID to splunklogs@gmail.com. How do I get the list of alerts and reports configured with this email ID (splunkdata@gmail.com)?
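
A sketch using the saved-searches REST endpoint; it assumes your role can read the endpoint and that the address appears in the email action's To field:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.email.to="*splunkdata@gmail.com*"
| table title, eai:acl.app, eai:acl.owner, action.email.to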

The REST API seems to return default values for max_searches_per_cpu, while the btool command brings back the actual values.

| rest splunk_server=* /services/search/concurrency-settings/search
| table *

Any thoughts?
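
For comparison, a btool invocation that shows the effective limits.conf value and the file that supplies it (run on the instance being checked; the path assumes a default install):

$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep max_searches_per_cpu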

Hello, I'm new to Splunk and have been working on setting up some dashboards. I figured I'd use the fancy new Dashboard Studio, since I assume that's the direction people are expected to go (and I like JSON a lot more than XML). After setting up a couple of line charts, I tried to add a drilldown to one of them (for a simple test, just pointing to https://www.google.com), and the clicks would not do anything. Not on the legend, not on the lines, points, or axes of my line chart. Nothing. I tried saving it as an area chart and as a bar chart: same deal. For comparison, I made a line chart on a classic dashboard using the exact same query and drilldown parameters, and the clicks brought me to a new page. Similarly, I created a statistics table in Dashboard Studio and noticed drilldown works in that situation as well. It seems to be limited to just visual charts + Dashboard Studio = no drilldown, and I'm not sure what I can do to resolve this.
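
For reference, a sketch of the event handler shape Dashboard Studio expects in a chart's JSON definition; the IDs viz_line_1 and ds_search_1 are placeholders, and whether line/area charts honor it at all depends on the Dashboard Studio version in use (older releases only supported drilldown on a subset of visualization types):

"viz_line_1": {
    "type": "splunk.line",
    "dataSources": { "primary": "ds_search_1" },
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "https://www.google.com",
                "newTab": true
            }
        }
    ]
}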

When producing reports, the recipients receive only the first 100 lines. Is this a known limitation?

Hello everyone,

I have an alert that runs every 15 minutes and checks logs for the last 15-minute time span. I want the alert to not run for the 1:30 am cycle. Currently I am using the cron expression */15 0,2-23 * * *, but this skips all schedules between 1 and 2 am [1:00, 1:15, 1:30, 1:45]. Is there any way I can skip only the alert scheduled at 1:30 am (which searches the time range 1:15:00 to 1:30:00) within one cron schedule? I know it can be done easily with 2 schedules, but I was wondering if this can be achieved within one cron expression.

Thanks.
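
Standard cron cannot vary the minute list per hour within a single expression, so one workaround sketch is to schedule */15 * * * * and filter the unwanted run inside the search itself; the guard below uses addinfo so it keys off the search window rather than the wall-clock time at execution:

... your existing alert search ...
| addinfo
| where strftime(info_max_time, "%H:%M") != "01:30"
| fields - info_min_time, info_max_time, info_sid, info_search_time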

Hi,

I am getting the error below when I run "splunk apply shcluster-bundle" on the deployer:

Error in pre-deploy check, uri=?/services/shcluster/captain/kvstore-upgrade/status, status=502, error=Cannot resolve hostname

Can anyone help with what the issue might be?

IP scanner use cases using an SPL query. I'm new to Splunk and I'm trying to find the SPL query for these use cases:

IP scanner hitting many IPs on a single port
IP scanner hitting a single IP on many ports
IP scanner hitting many IPs on many ports

How could we achieve this using an SPL query? Thanks
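
A sketch of the general pattern, assuming firewall or network traffic events with fields src_ip, dest_ip, and dest_port; the index, sourcetype, time span, and thresholds are placeholders to tune for your environment:

index=network sourcetype=firewall_traffic
| bin _time span=5m
| stats dc(dest_ip) as distinct_ips, dc(dest_port) as distinct_ports by src_ip, _time
| eval scan_type=case(distinct_ips>100 AND distinct_ports<=2,  "many IPs on a single port",
                      distinct_ips<=2  AND distinct_ports>100, "a single IP on many ports",
                      distinct_ips>100 AND distinct_ports>100, "many IPs on many ports")
| where isnotnull(scan_type)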

Hi,

I am trying to find a query to extract a specific code from the raw Splunk data. Below is the raw content.

raw: [demo] FATAL com.test.data - ***** Major issue error: xyz: Completion Code '1', Reason '111'

I need to extract the text "Major issue error: xyz". Please help me extract it.

Thanks,
Raj.
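
A sketch with rex, assuming the code is always a single token immediately after the literal "Major issue error:"; the field name major_issue is a placeholder:

| rex field=_raw "(?<major_issue>Major issue error:\s*\w+)"
| table major_issue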

I am using the cluster search below:

| cluster t=0.1 showcount=t countfield=no_of_events
| table _time, no_of_events, _raw
| sort -no_of_events
| dedup no_of_events

In the output I am getting the entire raw message in the table; however, I want to show only the error message. Is there any way to extract only specific messages instead of the full raw message?
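
One sketch, assuming the error text can be pulled out with a regular expression; the pattern below (everything from "ERROR" to the end of the line) is a placeholder to adjust to your message format:

| cluster t=0.1 showcount=t countfield=no_of_events
| rex field=_raw "(?<error_message>ERROR.*)"
| table _time, no_of_events, error_message
| sort -no_of_events
| dedup no_of_events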

So I'm fairly new to using data models for my visuals and to converting my network performance dashboard to summarized data model searches. My visuals for error rate and request volume worked easily, but latency not so much. Here's the search I am attempting:

| tstats summariesonly=t avg(All_Performance.Network.latency) from datamodel=Performance.All_Performance where nodename=All_Performance.Network BY _time span=15s

Here are searches that work:

| tstats summariesonly=t count(All_Performance.Network.latency) from datamodel=Performance.All_Performance where nodename=All_Performance.Network BY _time span=15s

| from datamodel Performance.Network | timechart span=15s avg(latency) as latency

Can someone explain why tstats count() works here, but not min, avg, max, etc.?

I have some JSON (raw event) like below; this is one event:

{
    "place": "bar",
    "stock": [
        { "brand": "keith",    "type": "drink", "owner": "Tom" },
        { "brand": "qfarm",    "type": "food",  "owner": "Mike" },
        { "brand": "blue",     "type": "drink", "owner": "Jerry" },
        { "brand": "redriver", "type": "food",  "owner": "Don" }
    ]
}

The system has already extracted the fields "place", "brand", "type", and "owner". What I would like is to extract "brand" into a new field, "brand_drink" or "brand_food", depending on whether "type" is drink or food, and do the same for "owner". In this example there are 4 items under "stock"; other events have more or fewer, which might require a loop.

I've been struggling with this. Can someone help, please?
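
A sketch of one search-time approach, assuming the raw event is valid JSON and that collapsing the expanded rows back to one row per event by grouping on _raw is acceptable; the output field names are placeholders:

| spath path=stock{} output=stock_item
| mvexpand stock_item
| spath input=stock_item path=brand output=item_brand
| spath input=stock_item path=type output=item_type
| spath input=stock_item path=owner output=item_owner
| eval brand_drink=if(item_type="drink", item_brand, null()), brand_food=if(item_type="food", item_brand, null())
| eval owner_drink=if(item_type="drink", item_owner, null()), owner_food=if(item_type="food", item_owner, null())
| stats values(place) as place, values(brand_drink) as brand_drink, values(brand_food) as brand_food, values(owner_drink) as owner_drink, values(owner_food) as owner_food by _raw
| fields - _raw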

I have a UF installed on an endpoint and plan to do more, but whenever the endpoint (laptop) is offline I get missing-forwarder alerts from the DMC. This will happen frequently, since endpoints are shut down for the night/weekend, offline for travel, in for repair/replacement, etc. Is there any way to group forwarders in the DMC and set alert thresholds or settings so that I get immediate and continuous alerts about some systems, like servers, but not others, like endpoints?
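
One workaround sketch, independent of the stock DMC alert: clone or replace the missing-forwarder alert with a search over the forwarder connection metrics and filter out a lookup of known endpoints. The lookup name endpoint_forwarders.csv and its hostname column are placeholders, and the sketch assumes the tcpin_connections metrics events carry the forwarder's hostname field:

index=_internal source=*metrics.log group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| where last_seen < relative_time(now(), "-15m")
| search NOT [| inputlookup endpoint_forwarders.csv | fields hostname]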

We recently did a Splunk upsize: the instance type changed from c6i.4xlarge to m6i.8xlarge for the ad hoc SH. We are going from c6i.4xlarge with 16 vCPU and 32 GiB of memory to m6i.8xlarge with 32 vCPU and 128 GiB of memory. Where can we see this change? Is there any command or dashboard that can help validate it? We tried the monitoring console.
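
One quick validation sketch from search, using the server info REST endpoint (run from a search head that can reach the resized instance; splunk_server=* assumes distributed search is configured):

| rest splunk_server=* /services/server/info
| table splunk_server, numberOfCores, physicalMemoryMB, os_name, cpu_arch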

Hello, can any Splunkers share ALL of the <option name=""> tags available to use with the Missile Map viz? On Splunkbase, the author states:

Customisation options. The following options are available to customise:

Lines
- Default color: The color to use for a line when no "color" field is present in the data (default: #65a637)
- Weight: The weight to use for a line when no "weight" field is present in the data (default: 1)

Map
- Tile set: The map tiles to use
- Custom tile set: If you wish to use a tile set not in the preset list (e.g. http://tile.stamen.com/toner/{z}/{x}/{y}.png)
- Latitude: Starting latitude to load
- Longitude: Starting longitude to load
- Zoom: Starting zoom level to load

However, there is no sample of the syntax. Thanks in advance for your help. God bless, Genesius