All Topics

Hi, how can I transform a table so that the result would look something like this?
On Thursday, Aug. 25th, the Splunk Log Observer team is making a change that will speed up most Log Observer and Log Observer Connect queries, which may cause the logs table to show more results than before.

[Screenshot: Splunk Log Observer, highlighting the logs table and the group-by control for visual analysis.]

What's changing?

Today, the logs table in the Log Observer UI only shows log messages containing the fields that the visual analysis is grouped by. For example, in the screenshot above, the visual analysis is grouped by 'severity'. After Thursday, Aug. 25th, when you group by a field in Log Observer, the logs table can include messages with or without that field. If you want to ensure that the logs table only shows messages that include the grouped fields, add a filter like "example=*". To add a filter in Log Observer, click the Add Filter button, search or browse for a field, and click "Include all logs with this field" to add a filter like "severity=*".

Why are we changing it?

This change makes the average Log Observer query faster and more efficient, and it makes it easier to understand what Log Observer is doing in each query. The logs table can always be returned to its original behavior by adding filters like "example=*" to the filter bar.

What effects will this change have in Log Observer?

After this change, your saved Log Observer queries that group by some field may start to include more results in the logs table than they used to, because the results can now include logs with or without that field. If you want to ensure that the logs table only shows messages that are included in the groups shown in visual analysis, add a filter like "example=*" to the filter bar.

How does this work today, before the change?

You can use the group-by dropdown control in Log Observer to show visual analysis grouped by different fields. Before this change, when you choose a field to group by, Log Observer shows only logs containing that field, to ensure that the logs table does not show messages that do not appear in the Visual Analysis area. This original behavior can be restored by adding an explicit filter, like "example=*".

About Splunk Log Observer

Splunk Log Observer offers a no-code experience for finding and analyzing log data integrated with Observability Cloud, for fast troubleshooting and adding to Splunk Observability dashboards. To learn more, sign up for a free trial.

— Rebecca Tortell, Principal Product Manager, Observability
Last night I installed the UF onto a system hosting some Docker containers. I wanted to grab the log files without modifying the existing containers' config, so I created symlinks to the container logs (/var/lib/docker/containers/<name>) in /var/log, then set the stanzas in inputs.conf to look at those symlinks. I bounced the app and waited about half an hour: nothing. I searched around and found references to followSymlinks, so I added that to each stanza as 'true'. It's been ~7 hours and still nothing. What did I do wrong here?
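For reference, a minimal inputs.conf sketch of the setup described above (the paths, index, and sourcetype are placeholders). If memory serves, the documented attribute name is followSymlink (singular), and it defaults to true, so a misspelled attribute or filesystem permissions on /var/lib/docker are worth checking first:

```
# inputs.conf on the UF -- placeholder paths/index/sourcetype
[monitor:///var/log/mycontainer-json.log]
# singular attribute name; defaults to true
followSymlink = true
index = docker
sourcetype = docker:json
```

Note the UF's service account must be able to traverse the symlink target directory, not just read the symlink itself.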
Hi All, I want to write a search which gives me total event counts for each host as per the time range picker. Additionally, I want to add two more columns which give event counts for each host in the last 7 days and the last 24 hours. My SPL is in this format:

index="xxx" field1="dummy_value" field2="dummy_value"
| stats sparkline(sum(event_count)) AS sparkline, max(_time) AS _time, sum(event_count) AS "Total_Event_Count" BY field2, field3, field4
| table field2, sparkline, field3, field4

I tried using the append command but it does not give me proper results, so I need your help to build the SPL. Thank you
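One common pattern that avoids append is to compute all three windows in a single pass with conditional sums (a sketch, assuming the search time range covers at least the last 7 days and that event_count is the per-event count field from the question; grouping here is BY host per the stated goal):

```
index="xxx" field1="dummy_value" field2="dummy_value"
| eval cnt_7d=if(_time >= relative_time(now(), "-7d"), event_count, 0)
| eval cnt_24h=if(_time >= relative_time(now(), "-24h"), event_count, 0)
| stats sum(event_count) AS Total_Event_Count, sum(cnt_7d) AS Last_7_Days, sum(cnt_24h) AS Last_24_Hours BY host
```

The eval lines zero out events older than each window, so a single stats pass produces all three columns.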
I have created a number of apps and push them out using the command line. In serverclass.conf I want to add a restart of the forwarders. This is what I have so far:

[default]
restartSplunkd = true
issueReload = true

[serverClass:duke_test_app:app:duke_test_app]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:duke_test_app]
whitelist.0 = xxxxxx

Will this work?
"user-info" index=user_interface_type sourcetype=*
| table _time, host, port, _raw
| sendemail to="abc@splunk.com" sendresults=true

I use the above query to list the details for the search term "user-info". I want to take this string "user-info" and pass it into the title of the e-mail, as: Notification received for user-info. How do I do that?
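sendemail accepts a subject argument, so for a fixed search term the simplest approach is to set it directly (a sketch based on the query above):

```
"user-info" index=user_interface_type sourcetype=*
| table _time, host, port, _raw
| sendemail to="abc@splunk.com" sendresults=true subject="Notification received for user-info"
```

If the search runs as a saved alert instead, the subject can also be set in the alert's email action, where tokens like $name$ are available.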
We have a DSP k8s cluster. When creating a DSP connection, is there a way for me to set up an HTTP proxy for a data destination, i.e., send outputs to a destination over an HTTP proxy? There seems to be no such option in the UI. Maybe it can be achieved by adding HTTP proxy env variables to the container in charge of sending outgoing traffic, by editing the K8s Deployment, but I am not sure which Deployment I should change; or maybe those env variables should be added to the k8s master nodes…
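If the env-variable route turns out to be viable, the standard Kubernetes mechanism looks like the patch below (a sketch only: the Deployment name, container name, and proxy address are placeholders, and whether the DSP output container honors these variables is an assumption that needs verifying):

```yaml
# kubectl patch deployment <dsp-output-deployment> --patch-file proxy-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: dsp-output              # placeholder container name
          env:
            - name: HTTP_PROXY
              value: "http://proxy.example.com:3128"
            - name: HTTPS_PROXY
              value: "http://proxy.example.com:3128"
            - name: NO_PROXY             # keep in-cluster traffic off the proxy
              value: ".cluster.local,10.0.0.0/8"
```

Patching the Deployment (rather than the nodes) keeps the change scoped to the one workload and survives pod rescheduling.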
I need to write a regular expression to extract a few fields from this, but I am not able to figure it out. Can you please help me with it?

X-Response-Timestamp: 2022-08-24T07:27:26.150Z
x-amzn-Remapped-Connection: close
... 4 lines omitted ...
X-Amzn-Trace-Id: Root=1-6305d2de-69ec840431ff21182b4a9f68
Content-Type: application/json

{"code":"APS.MPI.2019","severity":"FATL","text":"Invalid Request","user_message":"Request id has already used."}

Above is the whole log. I need to extract code, severity, and message. I can't work out the format well enough to fetch them.
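Since the payload at the end of the event is JSON, one approach is a rex per key that captures the quoted value (a sketch; the key names match the sample above):

```
| rex "\"code\":\"(?<code>[^\"]+)\""
| rex "\"severity\":\"(?<severity>[^\"]+)\""
| rex "\"user_message\":\"(?<user_message>[^\"]+)\""
```

If the JSON part can first be isolated into its own field (e.g. with a rex capturing everything from the opening brace), spath on that field is usually more robust than hand-written regexes.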
Is there a way to manually trigger modular inputs using the REST API? Most of the advice I've seen involves triggering mod inputs by turning them off and then on, but my setup uses cron intervals instead of seconds, so it doesn't execute on enablement.
Hello Splunk Team, I want to build a dashboard over the golden signals. Can anyone help me, or does anyone have a prebuilt dashboard, so I can build my dashboard accordingly?
Hello everyone, I have been reading the documentation for timecharts; however, I am a bit confused about how I can modify the encoding for the timechart to return the second column instead of the first, as shown in the image. I want to use a single-value timechart to monitor the hosts reporting to me per hour and track the increase/decrease in them. Many thanks for all your help.
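For the underlying search, counting distinct reporting hosts per hour is typically written like this (a sketch; the index is a placeholder):

```
index=* | timechart span=1h dc(host) AS reporting_hosts
```

With a single series like this, a Single Value visualization can show the latest value, and its trend option shows the hour-over-hour increase or decrease.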
Greetings, I use LDAP for authentication combined with RSA multifactor authentication. Is it possible to configure an exception for local users?
Hi, how do I display my Status Indicator with dynamic colors and icons in a Trellis layout?

| eval status=case(status_id==0,"Idle", status_id>=1 AND status_id<=3,"Setup/PM", status_id==4,"Idle", status_id>=5 AND status_id<=6,"Down", status_id>=7 AND status_id<=8,"Idle", status_id==9,"Running")
| eval color=case(status_id==0,"#edd051", status_id>=1 AND status_id<=3,"#006d9c", status_id==4,"#edd051", status_id>=5 AND status_id<=6,"#ff0000", status_id>=7 AND status_id<=8,"#edd051", status_id==9,"#42dba0")
| eval icon=case(status_id==0,"times-circle", status_id>=1 AND status_id<=3,"user", status_id==4,"times-circle", status_id>=5 AND status_id<=6,"warning", status_id>=7 AND status_id<=8,"times-circle", status_id==9,"check")
| stats last(status) AS status, last(color) AS color, last(icon) AS icon BY internal_name

This only displays the status with no icons and the default grey color. Thanks.
I have recently realized that the data models of certain indexes are occupying a lot of disk space, so I lowered the Summary Range of the related data model from 3 months to 1 month. So far I have not seen a drop in disk usage on the Data Model screen. Do I need to Rebuild and Update the acceleration? If so, in which order, and are there any performance or other risks involved in doing so? Thanks, Regards,
Hi All, I have an Exchange on-prem distribution list, let's say dl@mydomain.com. I want to know how many emails were sent to this DL in one year. Experts, please help me with the Splunk query to get this information.
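Assuming Exchange message-tracking logs are already indexed, a yearly count would look something like the sketch below. The index, sourcetype, and recipient field names here are placeholders that depend on which Exchange add-on is in use, so they need to be checked against the actual events:

```
index=msexchange sourcetype="MSExchange:*:MessageTracking" recipient="dl@mydomain.com" earliest=-1y
| stats count AS emails_to_dl
```

If the recipient field is multivalue in your data, the filter still matches as long as any recipient equals the DL address.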
We have had a problem with our Microsoft Azure plugin since July. The field appliedConditionalAccessPolicies: [ [ - ] ] is missing its data. We upgraded the plugin to the latest version but are still facing the same issue. Is anyone facing a similar problem? Please let us know if there is a fix.
Hi Splunkers, I need help translating this search query into Splunk configuration via props/transforms. For context, the letter field was extracted via CSV. The "letter" field value is dynamic: it may have fewer or more values, and each value is in HTML tag format. Syntax: <p>"value"</p>. What would be the best practice in this scenario: should I go with the search-time method or index time?

Sample query:

| makeresults
| eval letter = "<p>A</p><p>B</p><p>C</p><p>D</p>"
| eval letter = replace(letter,"<p>","")
| eval letter = replace(letter,"</p>","__")
| makemv delim="__" letter

Expected output:

letter
A
B
C
D
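A search-time equivalent of the makemv logic can be expressed as a multivalue field extraction in props/transforms (a sketch; the sourcetype and stanza names are placeholders):

```
# transforms.conf
[letter_mv]
SOURCE_KEY = letter
REGEX = <p>([^<]+)</p>
FORMAT = letter::$1
MV_ADD = true

# props.conf
[my_csv_sourcetype]
REPORT-letter_mv = letter_mv
```

MV_ADD = true makes each <p>…</p> match append another value to the multivalue letter field. Search time is generally preferable here: it changes nothing on disk and can be adjusted later without re-indexing.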
Hi gurus, how do we exclude 0% process usage from hostmetrics? We would like to capture only those processes with >0% usage. I'd appreciate it if you could provide a sample.

hostmetrics:
  collection_interval: 10s
  scrapers:
    # System process metrics, disabled by default
    process:
      # (filter / exclude 0% process usage)
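The hostmetrics process scraper does not appear to expose a value-based filter, so this kind of exclusion usually has to happen later in the pipeline. With the OpenTelemetry Collector's filter processor, datapoint-level OTTL conditions can drop zero values (a sketch under the assumption that a recent collector build with OTTL support is in use; the metric name should be checked against what your scraper actually emits):

```yaml
processors:
  filter/drop-idle-processes:
    metrics:
      datapoint:
        # drop datapoints where process CPU utilization is exactly 0
        - metric.name == "process.cpu.utilization" and value_double == 0.0
```

The processor then needs to be referenced in the metrics pipeline under the service section for it to take effect.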
Hi peeps, I need help extracting some fields. Sample logs:

Aug 24 09:30:43 101.11.10.01 CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|GNRL_EV_ATTACK_DETECTED|Network attack detected|4|msg=User: NT AUTHORITY\\SYSTEM (System user)\r\nComponent: Network Threat Protection\r\nResult description: Blocked\r\nName: Scan.Generic.PortScan.TCP\r\nObject: TCP from 101.11.10.01 at 101.11.10.01:25\r\nObject type: Network packet\r\nObject name: TCP from 101.11.10.01 at 101.11.10.01\r\nAdditional: 101.11.10.01\r\nDatabase release date: 23/8/2022 12:26:00 PM rt=1661304218000 cs9=Workstation cs9Label=GroupName dhost=082HALIM141 dst=101.11.10.01 cs2=KES cs2Label=ProductName cs3=11.0.0.0 cs3Label=ProductVersion cs10=Network Threat Protection cs10Label=TaskName cs1=Scan.Generic.PortScan.TCP cs1Label=AttackName cs6=TCP cs6Label=AttackedProtocol cs4=2887053442 cs4Label=AttackerIPv4 cs7=25 cs7Label=AttackedPort cs8=2887125841 cs8Label=AttackedIP

Aug 24 09:30:43 101.11.10.01 CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|GNRL_EV_ATTACK_DETECTED|Network attack detected|4|msg=User: NT AUTHORITY\\SYSTEM (System user)\r\nComponent: Network Threat Protection\r\nResult description: Blocked\r\nName: Scan.Generic.PortScan.TCP\r\nObject: TCP from 101.11.10.01 at 101.11.10.01:42666\r\nObject type: Network packet\r\nObject name: TCP from 101.11.10.01 at 101.11.10.01:42666\r\nAdditional: 101.11.10.01\r\nDatabase release date: 23/8/2022 12:26:00 PM rt=1661304218000 cs9=Workstation cs9Label=GroupName dhost=082HALIM141 dst=101.11.10.01 cs2=KES cs2Label=ProductName cs3=11.0.0.0 cs3Label=ProductVersion cs10=Network Threat Protection cs10Label=TaskName cs1=Scan.Generic.PortScan.TCP cs1Label=AttackName cs6=TCP cs6Label=AttackedProtocol cs4=2887053442 cs4Label=AttackedProtocol cs7=42666 cs7Label=AttackedPort cs8=2887125841 cs8Label=AttackedIP

I need help extracting the highlighted value into a field named TCP. Sample: TCP=101.11.10.01. Please help. Thanks.
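One way is a rex anchored on the "Object:" text inside the msg portion (a sketch; it captures the IPv4 address that follows "TCP from"):

```
| rex "Object: TCP from (?<TCP>\d{1,3}(?:\.\d{1,3}){3})"
```

Both sample events above would then yield TCP=101.11.10.01. If the source IP can also appear as a hostname rather than an IPv4 address, the character class would need widening.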
Hi, so I have two clustered environments. I want to copy knowledge objects (KOs) from one site to another; they need to have pretty much the same alerts, dashboards, etc. Thanks