All Posts

Hello, I would like my router/firewall UniFi UDM-SE to send its logs to my VM (Splunk + Ubuntu Server). What I have done:
- on the Proxmox VM, no firewall (during the test)
- on my VM I have two NICs, one for management (network 205) and one for the remote logging destination (Splunk - network 203, the same network as my UDM)
- on my VM, ufw is running and I have opened ports 9997 and 514
- on my UDM SE, I have forwarded the syslog output to my remote Splunk server (network 203).
On the Splunk server, ports 514 and 9997 are listening. So far, no logs appear in Splunk. How does ufw handle traffic when two different networks are in use? How do I add the second NIC (network 203) to Splunk? Ideas?
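For illustration, a minimal sketch of the pieces involved, assuming the logging NIC on network 203 is eth1 (a hypothetical interface name) and that Splunk should receive the syslog feed directly on UDP 514:

# ufw - allow the syslog and forwarder ports on the logging interface only
ufw allow in on eth1 to any port 514 proto udp
ufw allow in on eth1 to any port 9997 proto tcp

# inputs.conf on the Splunk server - a plain UDP syslog input
[udp://514]
sourcetype = syslog
connection_host = ip

Splunk listens on all interfaces by default, so the second NIC normally needs no Splunk-side configuration; the usual culprits are ufw rules that only cover the management interface, the UDM sending TCP syslog while Splunk listens on UDP (or vice versa), or Splunk running as a non-root user that cannot bind to the privileged port 514 (in that case receive on a higher port such as 1514, or relay through rsyslog).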
Hi, I suppose that this query works for you:
index=notable
| stats count as alert_num by rule_name
| rename rule_name as csearch_label
| lookup savedsearches csearch_label as csearch_label OUTPUTNEW action.notable.param.security_domain as security_domain, description, eai:acl.app as app
| search app="SplunkEnterpriseSecuritySuite"
| table alert_num, csearch_label, app, security_domain, description
| sort - alert_num
Hi @brenner, if it isn't on Splunkbase, it doesn't exist! In my experience there are rarely add-ons for this kind of device; I have already created custom add-ons for other storage systems. You have to create a custom add-on. To do this, you could use the Splunk Add-On Builder (https://splunkbase.splunk.com/app/2962 ) or SA-CIM Vladiator (https://splunkbase.splunk.com/app/2968 ), or both of them. Ciao. Giuseppe
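For illustration only, a minimal props.conf sketch of what a hand-rolled add-on could start from, assuming a hypothetical sourcetype name hpe:3par:syslog and placeholder extractions (the real patterns depend on the device's log format):

[hpe:3par:syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# placeholder extraction - adjust the regex to the actual event layout
EXTRACT-component = :\s+(?<component>\w+)\s+
# CIM-style alias so the events can be mapped onto a data model field
FIELDALIAS-dest_for_cim = host AS dest

The Add-On Builder generates this kind of configuration through a guided UI, and SA-CIM Vladiator helps validate how well the result lines up with the CIM.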
Hi @simuneer, as I said, if you have a CVE list (e.g. the one from VulDB) you can check the contents of the CVEs against your data. Otherwise, you have two solutions: identify the pattern to search for (e.g. Log4j) in your logs and run a search containing these patterns, or maintain an Asset Inventory and extract from the CVE the affected device classes to associate the CVE with your assets. As I said, for a customer we implemented a connection with VulDB (it is a paid service) using an app from Splunkbase, and we developed an app to integrate these data with the customer's Asset Inventory. Ciao. Giuseppe
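As a rough sketch of the asset-based approach (all lookup and field names here are hypothetical, not from the original answer), assuming an asset_inventory.csv with fields dest and product, and a cve_list.csv with fields product and cve_id:

| inputlookup asset_inventory.csv
| lookup cve_list.csv product OUTPUT cve_id
| where isnotnull(cve_id)
| table dest product cve_id

This simply associates each asset with the CVEs that mention its product class; the pattern-search approach would instead run a normal search for the indicator strings (e.g. the Log4j JNDI pattern) across your indexed logs.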
Hi @LearningGuy, let me know if I can help you more, or, please, accept one answer so it can help the other people in the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @Naa_Win, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @jessieb_83, in my mind you should follow a different approach: these are the waterfall questions you need to answer to define what to index: What do I want to monitor? Which are the Use Cases that I want to implement? Which data are mandatory for my Use Cases? Once you have defined your monitoring perimeter (in terms of devices and data sources), you can implement filters on your data so that you index only the data required for your Use Cases. If you're speaking of Security Monitoring, you could use the Splunk Security Essentials app (https://splunkbase.splunk.com/app/3435) to define your Use Cases and the mandatory data for them. Ciao. Giuseppe
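As a rough illustration of that kind of index-time filtering (a sketch, not from the original reply), assuming a hypothetical sourcetype my:verbose:source whose DEBUG events are not needed by any Use Case; these settings go on the indexers or heavy forwarders:

# props.conf
[my:verbose:source]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

Events matching the REGEX are routed to the nullQueue and never indexed, so they do not count against license usage.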
Nothing in the internal logs that could signify a problem. Currently on an older version of Splunk Enterprise: Version 7.2.8, Build d613a50d43ac.
I've been fighting this for a week and just spinning in circles. I'm building a new distributed environment in a lab to prep for live deployment. Everything is RHEL 8, running Splunk 9.2: 2 indexers, 3 SHs, a cluster manager, a deployment manager, and 2 forwarders. Everything is "working"; I just need to tune it now. The indexers are cranking out 700,000 logs per hour, and 90% of it comes from audit.log, which is recording the indexers moving data in and out of buckets. We have a requirement to monitor audit.log at large, but we do not have a requirement to index what the buckets are doing. I've been looking at different approaches to this, but I imagine I'm not the first person to encounter it. Would it be better to tune audit.rules on the Linux side? Blacklist some keywords in the indexers' inputs.conf? Tune through props.conf? Would really appreciate some advice on this one. Thanks!
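One possible sketch (not from the thread) is to suppress those records on the Linux side so auditd never generates them, assuming the indexes live under /opt/splunk/var/lib/splunk (adjust to your actual SPLUNK_DB path and to your compliance requirements):

# /etc/audit/rules.d/10-splunk-exclude.rules
# suppress audit records for file activity under the Splunk index path;
# "never" rules must appear before the "always" rules they are meant to override
-a never,exit -F dir=/opt/splunk/var/lib/splunk -F perm=rwxa

The Splunk-side alternative is the usual props.conf/transforms.conf nullQueue filter on the indexers, which drops the matching audit events at parse time but still leaves auditd and the forwarders doing the work of generating and shipping them.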
I think using eventstats can get you the desired output you are looking for, if I am interpreting your question correctly.

<base_search>
| eventstats sum(eval(case('ProductCategory'=="productcat1", 'Sales Total'))) as productcat1,
             sum(eval(case('ProductCategory'=="productcat2", 'Sales Total'))) as productcat2

Or, for a more dynamic approach, something like this may work:

<base_search>
| eventstats sum("Sales Total") as overall_sales by ProductCategory
| eval overall_sales_json=json_object("fieldname", 'ProductCategory', "value", 'overall_sales')
| eventstats values(overall_sales_json) as overall_sales_json
| foreach mode=multivalue overall_sales_json
    [ | eval fieldname=spath('<<ITEM>>', "fieldname"),
             field_value=spath('<<ITEM>>', "value"),
             combined_json=if(isnull(combined_json), json_object(fieldname, field_value), json_set(combined_json, fieldname, field_value)) ]
| fromjson combined_json prefix=dynamic_
| fields - combined_json, overall_sales_json, fieldname, field_value, overall_sales
``` Below code is if you only want the new fields on the first row ```
| streamstats count as line_number
| foreach dynamic_* [ | eval <<FIELD>>=if('line_number'==1, '<<FIELD>>', null()) ]
| fields - line_number
| rename dynamic_* as *
I'm very new to metrics data in Splunk. I have a question regarding what plugin_instance is and how I can get its values. I'm trying to run the query below but end up with no results.
| mstats avg("processes.actions.ps_cputime.syst") prestats=true WHERE `github_collectd` host="*" span=10s BY plugin_instance
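Two sketches that can help confirm whether plugin_instance exists as a dimension for that metric (assuming the `github_collectd` macro from the question resolves to the right metrics index):

| mcatalog values(_dims) WHERE `github_collectd` metric_name="processes.actions.ps_cputime.syst"

| mcatalog values(plugin_instance) WHERE `github_collectd` metric_name="processes.actions.ps_cputime.syst"

The first lists every dimension recorded for the metric; the second lists the distinct plugin_instance values (for collectd's processes plugin this is typically the process name). If plugin_instance comes back empty, the mstats BY clause will naturally return no results.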
TLS is needed. The most common reason for the KV store not starting is an expired certificate. You just need to check this and replace it with a valid one. You should see that issue in mongod.log. Have you also checked that the MongoDB engine has been updated to the new one and that its version number has changed? Have you started Splunk after each version update so Splunk can do the needed migrations? With a SHC you must do that migration manually.
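Two quick checks, as a sketch (assuming the default certificate path; server.conf [sslConfig] serverCert may point somewhere else in your environment):

$SPLUNK_HOME/bin/splunk show kvstore-status
openssl x509 -noout -enddate -in $SPLUNK_HOME/etc/auth/server.pem

The first reports the KV store / mongod state as Splunk sees it; the second prints the expiry date of the server certificate the KV store uses for TLS.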
Hi Community team, I have a complex query to gather the data below, but a new request came up: I was asked to add the product category totals by category to the report email subject. With $result.productcat1$ and $result.productcat2$ I could approach that, but the way I'm calculating the totals I'm not getting the expected numbers, because I'm appending the columns from a subquery and transposing the values with xyseries. Could you please suggest how I can sum(Sales Total) by productcat1 and productcat2 into new fields while keeping the same output as I have now? E.g. something like: if ProducCategory="productcat1" then productcat1=productcat1+SalesTotal, else productcat2=productcat2+SalesTotal ``` but print the original output ```

Consider productcat1 and productcat2 are fixed values.

ENV   ProducCategory  ProductName  SalesCondition  SalesTotal  productcat1  productcat2
prod  productcat1     productR     blabla          9           152          160
prod  productcat1     productj     blabla          8
prod  productcat1     productc     blabla          33
prod  productcat2     productx     blabla          77
prod  productcat2     productpp    blabla          89
prod  productcat2     productRr    blabla          11
prod  productcat1     productRs    blabla          6
prod  productcat1     productRd    blabla          43
prod  productcat1     productRq    blabla          55

Thanks in advance.
Is there a TA for HPE 3PAR data? I have the logs ingested and would like to use an existing TA to normalize the data, but I haven't found one in Splunkbase or elsewhere online.
When using the Splunk Logging Driver for Docker, you can leverage SPLUNK_LOGGING_DRIVER_BUFFER_MAX to set the maximum number of messages held in buffer for retries. The default is 10 * 1000 but can anyone confirm the maximum value that can be set?
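For context, a sketch of how that variable is typically supplied (it is read from the Docker daemon's environment, not from the container); the drop-in path and the value 20000 are illustrative only and say nothing about an upper limit:

# /etc/systemd/system/docker.service.d/splunk-logging.conf
[Service]
Environment="SPLUNK_LOGGING_DRIVER_BUFFER_MAX=20000"

# then reload and restart the daemon
systemctl daemon-reload
systemctl restart docker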
I don't think this is exactly it, but it may lead you to the right path:
| rest /services/datamodel/model
| search eai:appName=search
| table updated
The updated field shows when the model was last updated.
Hello All, I have searched high and low to try to discover why the KV store process will not start. This system was upgraded from Splunk 8.0, to 8.2, and finally 9.2.1. I have looked in mongod.log and splunkd.log, but do not really see anything that helps resolve the issue. Is SSL required for this? Is there a way to set a correct SSL config, or disable it, in the server.conf file? Would the failure of the KV store process affect IOWAIT? I am running on Oracle Linux 7.9. I am open to any suggestions. Thanks, ewholz
Yep, you'll have to make separate calls for that. Filters on the SOAR REST API can be appended, but they work as an "AND" condition; "OR" is not supported in that sense, as a limitation of Django querysets. So the easiest way would be to combine the results of https://<your_soar_instance>/rest/container?_filter_name__icontains="computer" with the ones from https://<your_soar_instance>/rest/container?_filter_name__icontains="process" and then process them accordingly.
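As a rough sketch of combining the two calls from a shell (the ph-auth-token header, output files and jq merge are illustrative, not part of the original answer):

curl -sk -H "ph-auth-token: <your_token>" "https://<your_soar_instance>/rest/container?_filter_name__icontains=%22computer%22" > computer.json
curl -sk -H "ph-auth-token: <your_token>" "https://<your_soar_instance>/rest/container?_filter_name__icontains=%22process%22" > process.json
# merge the two result sets and drop containers that matched both filters (requires jq)
jq -s '[.[].data[]] | unique_by(.id)' computer.json process.json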
Hi @danspav, Thank you for your response. I made the changes, but when I click on the hyperlink it is not redirecting to the correct dynamically generated external URL 'https://abc12345.apps.dynatrace.com/ui/apps/dynatrace.classic.distributed.traces/ui/services/SERVICE-ABC12345678AB1A1/purepaths?servicefilter=0%1E9%11SERVICE_METHOD-12345ABC1234A123%14abc%100%111340861000%144611686018427387&gtf=c_1716990969058_1716991269058&gf=all'. Here are the screenshots and code below. Please assist with this.

"visualizations": {
    "viz_aBCd123": {
        "type": "splunk.table",
        "options": {
            "count": 5000,
            "dataOverlayMode": "none",
            "drilldown": "none",
            "backgroundColor": "#FAF9F6",
            "tableFormat": {
                "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
                "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
                "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.customUrl",
                    "options": {
                        "url": "$row.URL.value|n$",
                        "newTab": true
                    }
                }
            ],
A legend. Thank you for making that clear!