Hi everyone, I have a problem with line breaking in Splunk. I have tried following the methods from other posts. Here is my props.conf:

[test1:sec]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=AUTO
disabled=false
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%9QZ
TIME_PREFIX=<TimeCreated SystemTime='

When I applied this sourcetype to the raw Windows data it worked at first, but after I finished, the raw Windows data came in as one single event. #line-break
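If these are Windows XML events (the TIME_PREFIX suggests so), a common cause is that `([\r\n]+)` either matches inside a record or never matches between records, so everything merges. Here is a sketch of a breaker anchored to the start of each record; the `<Event` anchor is my assumption about the data, not something confirmed in the post:

```
# props.conf sketch: break only where a new <Event ...> record starts
[test1:sec]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)(?=<Event[\s>])
NO_BINARY_CHECK=true
CHARSET=AUTO
TIME_PREFIX=<TimeCreated SystemTime='
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%9QZ
```

If the records arrive with no newlines between them at all, the capture group has nothing to consume and the breaker would need adjusting to whatever actually separates the records.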
Hi Team, how do I write a calculated field for the below? | eval action=case(like("request.path","auth/ldap/login/names"),"success") The names segment of the path keeps changing, and the above is not working.
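Two things are likely tripping this up: in eval, double quotes make "request.path" a string literal rather than a field reference (field names containing dots need single quotes), and like() only treats % and _ as wildcards. A sketch, assuming the changing part is the final path segment:

```
| eval action=case(like('request.path', "%auth/ldap/login/%"), "success")
```

If a regex is easier to reason about, `match('request.path', "auth/ldap/login/")` inside the case() would work as well.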
Is the current Symantec Bluecoat TA (v3.8.1) compatible with SGOS v7.3.16.2? Has anyone got this to work who can provide some insight? After our proxy admins upgraded from 6.7.x to 7.x, all the field extractions ceased to work. The release notes say it is compatible up to 7.3.6.1. Is there an updated TA that we are not aware of? https://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Releasenotes Thanks.
index=testindex sourcetype=json source=websource
| timechart span=1h count by JobType
This is my search query to generate a timechart in Splunk. The JobType field has two values, 'Completed' and 'Started'. In the timeframe between a job being Completed and the next Started event, no jobs are running, so I need to represent that period as a new state called 'Not Running'. Conversely, the time between a Started event and the following Completed event needs to be called 'Running', because that is when jobs are running. I need to visualize these states in a timechart. Example: a job completes on 01/06/2024 at 17:00 (Completed), and the next job starts on 01/06/2024 at 20:00 (Started). Between 17:00 and 20:00 on 01/06/2024, the state is 'Not Running'. I do not want to capture individual jobs; I want to capture all jobs together. The main thing I want to illustrate in the timechart is when the state is 'Running' versus 'Not Running', so essentially the gaps between the Started and Completed events. I am stuck with this, so it would be awesome if I could get some help. Thank you.
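One way to approach this (a sketch, assuming Started and Completed strictly alternate when all jobs are taken together) is to turn each event into a state transition, bucket by time, and carry the last known state forward into the empty buckets:

```
index=testindex sourcetype=json source=websource JobType IN ("Started","Completed")
| eval state=if(JobType=="Started","Running","Not Running")
| timechart span=1h latest(state) as state
| filldown state
| eval {state}=1
| fields - state
```

The filldown carries the most recent state into hour buckets that saw no events, which is what renders the gap between a Completed and the next Started as 'Not Running'; the `{state}=1` trick then splits the single state column into 'Running' / 'Not Running' series for the chart.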
The Splunk App for Windows Infrastructure 2.0.4 is EOL and the app itself has been archived. I tried to download the app because I want to reuse some of the dashboards available in it, but I am unable to; I get the message: "This app restricts downloads to a defined list of users. Your user profile was not found in the list of authorized users." Is there a way around this? How can I get hold of the Splunk App for Windows Infrastructure 2.0.4? Kind regards, Jos
Hi, this is probably a product-related question. I have a requirement to monitor EDI files (834 enrollment files, in healthcare terms) end to end. I would like to see the number of EDI files received, processed, and saved, and to analyse file-processing failures. Which Splunk product(s) best suit my needs?
Hello, I need help with the following scenario. Let's say I have a log source with browser traffic data, and one of the available fields is malware_signature. I made a lookup table to filter the results down to 10 specific malware signatures I'd like to be alerted on. All 10 entries have wildcards, alongside another field called classification, like so:

malware_signature  classification
*mimikatz*         high

When I use inputlookup to filter the results it works well, but no matter what I try, I can't get the classification field added.

Works well for filtering:
[| inputlookup malware_list.csv | fields malware_signature]

classification field won't show:
[| inputlookup malware_list.csv | fields malware_signature classification]

Doesn't work:
[| inputlookup malware_list.csv | fields malware_signature] | lookup malware_list.csv malware_signature OUTPUT classification

Clarification: I use inputlookup to filter the results down to the logs I want by malware_signature. After that I want to enrich the table with the classification field, but the lookup command won't match the malware_signature values that contain wildcards.
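The lookup command does exact matching by default, which is why the wildcarded rows never match. One way around this (a sketch; the lookup definition name is an assumption) is to create a lookup definition over the CSV with a WILDCARD match_type, then use the definition instead of the bare file:

```
# transforms.conf sketch: wildcard matching on the signature column
[malware_list]
filename = malware_list.csv
match_type = WILDCARD(malware_signature)
```

With that in place, `| lookup malware_list malware_signature OUTPUT classification` should enrich events whose signature matches any of the wildcard patterns. (In Splunk Web this lives under Settings > Lookups > Lookup definitions, with match_type under the advanced options.)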
Hello follow Splunkers! We want to ingest Oracle Fusion Application (SaaS) audit logs into Splunk on-prem, and the only way to do this is through the REST API GET method. So, now that I cannot find a REST input option in Splunk or any free add-on from Splunk for this task, all I have read over the internet is to develop a script. I need your support to share a sample Python script that should not only pull the logs but also avoid duplicate logs with every pull. Thanks in advance!
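Not an Oracle-specific answer, but here is a minimal Python sketch of the usual pattern: pull from a REST endpoint, remember a checkpoint (the newest timestamp seen) in a local file, and only request records newer than it on the next run. The URL, parameter names, and response shape below are placeholders; Oracle Fusion's actual audit API will differ:

```python
import json, os
import requests

CHECKPOINT = "/opt/scripts/oracle_audit.checkpoint"   # last timestamp ingested
BASE_URL = "https://fusion.example.com/api/auditlogs"  # hypothetical endpoint

def read_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return f.read().strip()
    return "1970-01-01T00:00:00Z"  # first run: pull everything

def write_checkpoint(ts):
    with open(CHECKPOINT, "w") as f:
        f.write(ts)

def main():
    since = read_checkpoint()
    # "from" is a hypothetical filter parameter; consult the Fusion audit API docs
    resp = requests.get(BASE_URL, params={"from": since},
                        auth=("user", "pass"), timeout=60)
    resp.raise_for_status()
    newest = since
    for rec in resp.json().get("items", []):
        print(json.dumps(rec))  # stdout -> Splunk scripted input indexes it
        if rec.get("timestamp", "") > newest:
            newest = rec["timestamp"]
    write_checkpoint(newest)

if __name__ == "__main__":
    main()
```

Run it as a Splunk scripted (or modular) input on an interval; the checkpoint file is what prevents re-ingesting the same records on every pull.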
Hello, I would like my Unifi UDM-SE router/firewall to send its logs to my VM (Splunk + Ubuntu Server). What I have done so far:
- on the Proxmox VM, no firewall (during the test)
- on my VM I have two NICs, one for management (network 205) and one for the remote logging location (Splunk, network 203, the same as my UDM network)
- on my VM, ufw is running and I have opened ports 9997 and 514
- on my UDM-SE, I have pointed syslog forwarding at my remote Splunk server (network 203)

On the Splunk server, ports 514 and 9997 are listening. So far, no logs appear in Splunk. How does ufw deal with two different networks? How do I add the second NIC (network 203) to Splunk? Ideas?
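Splunk's UDP input listens on all interfaces by default, so there is usually nothing NIC-specific to add on the Splunk side; the usual suspects are ufw rules and port-514 privileges. A sketch (the interface name and subnet are assumptions):

```
# inputs.conf sketch: syslog listener
[udp://514]
sourcetype = syslog
connection_host = ip
# optional: only accept senders from the UDM's network
acceptFrom = 192.168.203.0/24

# ufw, on the Ubuntu side (eth1 = the 203-network NIC, an assumption):
#   ufw allow in on eth1 to any port 514 proto udp
```

Also worth checking: binding to port 514 needs root (or a capability), so Splunk running as a non-root user can fail to open it; and `tcpdump -i eth1 udp port 514` will confirm whether the UDM's packets arrive at the NIC at all.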
I've been fighting this for a week and just spinning in circles. I'm building a new distributed environment in a lab to prep for a live deployment. Everything is RHEL 8, using Splunk 9.2: 2 indexers, 3 SHs, a cluster manager, a deployment manager, and 2 forwarders. Everything is "working"; I just need to tune it now. The indexers are cranking out 700,000 logs per hour, and 90% of it comes from audit.log, which is recording the indexers' own bucket activity as they move data in and out of buckets. We have a requirement to monitor audit.log at large, but no requirement to index what the buckets are doing. I've been looking at different approaches, but I imagine I'm not the first person to encounter this. Would it be better to tune audit.rules on the Linux side? Blacklist some keywords in the indexers' inputs.conf? Tune through props.conf? Would really appreciate some advice on this one. Thanks!
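Filtering at the Splunk layer is a couple of stanzas: route audit events that mention Splunk's own data directory to the null queue. A sketch (the sourcetype name and install path are assumptions):

```
# props.conf sketch (on the indexers / heavy forwarders)
[linux_audit]
TRANSFORMS-drop_bucket_noise = drop_splunk_bucket_audit

# transforms.conf
[drop_splunk_bucket_audit]
REGEX = /opt/splunk/var/lib/splunk
DEST_KEY = queue
FORMAT = nullQueue
```

That said, the cleaner option is often an auditd exclusion on the Linux side (e.g. an audit.rules exclusion for the splunk user or the Splunk data path), since the events are then never generated and you save the pipeline and license cost entirely.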
I'm very new to metrics data in Splunk. I have a question: what is plugin_instance, and how can I get its values?
I'm trying to run the query below but end up with no results.
| mstats avg("processes.actions.ps_cputime.syst") prestats=true WHERE `github_collectd` host="*" span=10s BY plugin_instance
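For background: with collectd data, plugin_instance is a dimension identifying which instance of a plugin produced the measurement (for the processes plugin, typically the process name). A quick way to check whether the dimension exists and what values it takes (a sketch, reusing the post's macro) is mcatalog:

```
| mcatalog values(plugin_instance) WHERE `github_collectd` AND metric_name="processes.actions.ps_cputime.syst"
```

If that returns nothing, the metric name or the index constraint inside the macro is the problem, not the BY clause.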
Hi Community team, I have a complex query to gather the data below, but a new request came up: I was asked to add the product category totals by category to the report email subject. With $result.productcat1$ and $result.productcat2$ I could approach that, but the way I'm calculating the totals I'm not getting the expected numbers, because I'm appending the columns from a subquery and transposing the values with xyseries. Could you please suggest how I can sum(SalesTotal) by productcat1 and productcat2 into new fields while keeping the same output I have now? E.g., something like:

if ProducCategory="productcat1"; then productcat1=productcat1+SalesTotal, else productcat2=productcat2+SalesTotal
``` But Print the original output ```

Consider productcat1 and productcat2 are fixed values.

ENV   ProducCategory  ProductName  SalesCondition  SalesTotal  productcat1  productcat2
prod  productcat1     productR     blabla          9           152          160
prod  productcat1     productj     blabla          8
prod  productcat1     productc     blabla          33
prod  productcat2     productx     blabla          77
prod  productcat2     productpp    blabla          89
prod  productcat2     productRr    blabla          11
prod  productcat1     productRs    blabla          6
prod  productcat1     productRd    blabla          43
prod  productcat1     productRq    blabla          55

Thanks in advance.
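eventstats is built for this: it computes an aggregate and attaches it to every row without collapsing the output. A sketch, appended after the existing query (field names taken from the post; the eval-style aggregation puts both totals on every row, so $result.productcat1$ and $result.productcat2$ can read them from the first row regardless of sort order):

```
| eventstats sum(eval(if(ProducCategory=="productcat1", SalesTotal, 0))) as productcat1,
             sum(eval(if(ProducCategory=="productcat2", SalesTotal, 0))) as productcat2
```

The totals will repeat on every row rather than only the first, as in the example output, but the original columns are untouched.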
Is there a TA for HPE 3PAR data? I have the logs ingested and would like to use an existing TA to normalize the data, but I haven't found one in Splunkbase or elsewhere online.
When using the Splunk Logging Driver for Docker, you can leverage SPLUNK_LOGGING_DRIVER_BUFFER_MAX to set the maximum number of messages held in the buffer for retries. The default is 10 * 1000, but can anyone confirm the maximum value that can be set?
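I can't point to a documented hard ceiling; the value is presumably parsed as an integer by the driver, so the practical limit is daemon memory rather than the variable itself. For reference, these advanced driver options are environment variables on the Docker daemon, not the container; a sketch of setting one via a systemd drop-in (the path follows the usual convention):

```
# /etc/systemd/system/docker.service.d/splunk-logging.conf (sketch)
[Service]
Environment="SPLUNK_LOGGING_DRIVER_BUFFER_MAX=100000"
```

Followed by `systemctl daemon-reload` and a daemon restart for it to take effect.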
Hello All, I have searched high and low to try to discover why the KV store process will not start. This system was upgraded from Splunk 8.0 to 8.2, and finally to 9.2.1. I have looked in mongod.log and splunkd.log, but do not really see anything that helps resolve the issue. Is SSL required for this? Is there a way to set a correct SSL config, or disable it, in server.conf? Would the failure of the KV store process affect IOWAIT? I am running on Oracle Linux 7.9. I am open to any suggestions. Thanks, ewholz
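A couple of generic checks while you narrow it down: `splunk show kvstore-status` reports the current state and failure reason, and SSL is not mandatory for the KV store. To isolate whether the KV store is even the problem, it can be turned off in server.conf (a sketch; use only as a diagnostic step):

```
# server.conf sketch
[kvstore]
disabled = true
```

One upgrade-specific gotcha: jumping several major versions can leave the embedded MongoDB on an old storage engine / feature compatibility version, which tends to show up in mongod.log as a storage-engine complaint rather than an obvious error. And a stopped KV store process by itself shouldn't drive IOWAIT; sustained IOWAIT usually points at indexing or search load instead.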
Hey all, wondering if anyone has solved this problem before. I'm looking at the potential for taking a Splunk Cloud alert and using it to connect to Ansible Automation Platform to launch a template. I have looked into webhooks; however, AAP seems to allow only GitHub and GitLab webhooks on templates, and when attempting to POST to the API endpoint to launch the template, it would sit there and eventually time out. Wondering if anyone has explored this space before and has any suggestions on how to get this connection working.
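For what it's worth, the generic AAP/AWX launch endpoint is a plain POST to /api/v2/job_templates/<id>/launch/ with a token, so a timeout (rather than a 4xx) smells like a network-path problem between Splunk Cloud and AAP; Splunk Cloud webhooks can only reach publicly routable HTTPS endpoints. A minimal sketch of the call itself, with the host, template ID, and token as placeholders, useful for testing from a box that can definitely reach AAP:

```python
import requests

AAP_HOST = "https://aap.example.com"  # placeholder
TEMPLATE_ID = 42                      # placeholder job template ID
TOKEN = "your-oauth2-token"           # placeholder AAP token

# Launch the job template; extra_vars could carry alert context from Splunk
resp = requests.post(
    f"{AAP_HOST}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"extra_vars": {"triggered_by": "splunk_alert"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["job"])  # ID of the launched job
```

If this works from inside your network but not from Splunk Cloud, the fix is exposure/allow-listing of the AAP API (or an intermediary such as SOAR), not the request format.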
#machinelearning Hello, I am using dist=auto in my DensityFunction and I am getting negative Beta results. I feel like this is wrong, but keep me honest: I would like to understand how the Beta distribution is fitted, and why the mean is negative when my values are a 0 to 100% success rate. I am happy with the other distributions (e.g., Gaussian KDE and Normal). |fit DensityFunction MyModelSuccessRate by "HourOfDay,Object" into MyModel2 dist="auto" Thanks, Joseph
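One thing worth checking before blaming the fit: the Beta distribution is only defined on [0, 1], so if the field is a 0-100 percentage, the Beta fit is operating outside its support and odd parameter estimates would not be surprising. You can inspect what was actually fitted per group with the MLTK summary command (a sketch, reusing the model name from the post):

```
| summary MyModel2
```

If the chosen distribution and parameters per HourOfDay/Object group look wrong there, try rescaling the field to 0-1 (MyModelSuccessRate/100) before the fit and see whether the Beta estimates become sensible.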
We have a query where we get the count by site:

index=test-index | stats count by host site

When we run this query on the search head cluster, we get this output:

site       host
undefined  appdtz
undefined  appstd
undefined  apprtg
undefined  appthf

When we run the same query on the deployer, we get the output correctly, with site populated:

site   host
sitea  appdtz
sitea  appstd
siteb  apprtg
siteb  appthf

How do we fix this issue in the SH cluster?
Hi all. I'm trying to understand how to map my diagnostic-settings AAD data, coming in with the mscs:azure:eventhub sourcetype, to CIM. I notice the official docs for the TA mention that this sourcetype isn't mapped to CIM, while azure:monitor:aad is. I'm attempting to leverage Enterprise Security to build searches off some UserRiskEvents data coming in, and would like to be able to reference data models. So, is there any way I can take my existing data and transform it to match what's mapped to CIM? I envision, as with other TAs, that events can be filtered down to more specific sourcetypes at ingestion while the input on the IDM is set to a parent sourcetype, but I can't confirm whether that's true.
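Ingest-time sourcetype rewriting is a standard TA technique, so something along these lines should be possible (a sketch; the REGEX is an assumption and would need to match your actual UserRiskEvents payload):

```
# props.conf sketch (index-time, on whatever instance parses the data)
[mscs:azure:eventhub]
TRANSFORMS-set_aad_st = set_aad_sourcetype

# transforms.conf
[set_aad_sourcetype]
REGEX = "category":\s*"UserRiskEvents"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:monitor:aad
```

The caveat: the azure:monitor:aad CIM mappings in the TA assume the event shape produced by its own input, so verify that the Event Hub payload matches closely enough for the TA's search-time extractions to still apply after the rename.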
Hello, I have a couple of detailed dashboards, all indicating the health status of my systems. Instead of opening each detailed dashboard and looking at every graph, I would like to have one "Overview Dashboard" in traffic-light style. If an error shows up in a detailed dashboard, I would like the corresponding traffic light on the overview dashboard to turn red, with a drilldown link to the detailed dashboard where the error was found. Any good ideas how one would build something like that? I have one solution, but it seems complicated: I would leverage scheduled searches that write into different lookups, and the overview dashboard could read from those lookups and search for error codes.
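The scheduled-search-into-lookup pattern is actually the common way to do this, and each traffic light is then just a single-value panel with color ranges and a drilldown to its detailed dashboard. A sketch of one panel's search (the lookup name and fields are assumptions):

```
| inputlookup system_health.csv
| stats sum(error_count) as errors
| eval status=if(errors > 0, "RED", "GREEN")
```

Color the single value by the errors field (range 0 = green, anything above = red) in the panel's visualization settings, and set the panel drilldown to link to the corresponding detailed dashboard.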