All Posts



Hi @sahilvats, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @dhana22, there's no problem: the latest version of Splunk is certified for Linux kernels greater than 3.x. Ciao. Giuseppe
Hi @Roy_9, let me know if I can help you more, or, please, accept an answer for the other members of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @byronrivers, as I said, we use Tenable Nessus/SecurityCenter because we are Tenable partners and we know these solutions very well (not me, only my colleagues!), so we integrated them by taking the scan results into Splunk and displaying them. If you have SecurityCenter, there's an Add-On to take the logs; if you have Nessus, you have to create a script that activates the scans and takes the results into Splunk. Ciao. Giuseppe
Hi @VijaySrrie, you have to analyze your data sources: some of them aren't correctly configured. The most common misconfigurations are the following: forwarders installed on an active/active cluster; receiving syslog using two syslog servers but without a load balancer; using crcSalt = <SOURCE> with logs that rotate or with old files that are tarred. The first job should be identifying the duplicated data sources using the following search:

index="indexname" | stats values(sourcetype) AS sourcetype count by _raw | where count>1

In this way you have the list of duplicated sourcetypes and you can focus your analysis on them. Then, if the duplicated sourcetypes come from a cluster or from syslog, you should analyze your architecture to find possible duplication. Ciao. Giuseppe
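As a follow-up sketch (not from the original thread), the same idea can be narrowed down by host and source as well, to see exactly where the duplicates are coming from; `indexname` is a placeholder for your own index:

```
index="indexname"
| stats count BY _raw, sourcetype, source, host
| where count > 1
| stats sum(count) AS duplicate_events BY sourcetype, source, host
| sort - duplicate_events
```

The second stats collapses the per-event duplicates into one row per sourcetype/source/host, so the sources with the most duplication float to the top.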
I have defined a stacked bar chart in my Splunk Enterprise dashboard. I've been trying to solve these problems but I cannot solve them. Please help me out. These are the problems that I encountered:

1. As you can see in the line `| eval PERCENTAGE = round(PERCENTAGE, 1)`, I've rounded the data. However, for values such as 7 and 10, the ".0" gets dropped for some reason. Our client strongly wants the data labels in a uniform format. Is there a way to force whole numbers to display with a decimal place? * 7 & 10 (AS-IS) -> 7.0 & 10.0 (TO-BE)
2. Since the color of the bar is dark, I wanted to change the label inside the bar to white. I wrote this line: `<option name="charting.fontColor">#ffffff</option>`. However, this changes the whole font color, including the axis labels.
3. I also want to make the label in the bar "Bold" with the "Times New Roman" font family. Is there a way to specify that?

This is my current code:
```XML
<row>
  <panel>
    <chart>
      <search>
        <query>
          index = ~ (TRIMMED DUE TO PRIVACY ISSUE)
          | eval PERCENTAGE = round(PERCENTAGE, 1)
          | fields DOMAIN MONTHS PERCENTAGE
          | chart values(PERCENTAGE) over MONTHS by DOMAIN
        </query>
        <earliest>earliest</earliest>
        <latest>latest</latest>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.visibility">collapsed</option>
      <option name="charting.axisY.abbreviation">none</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.chart">column</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.legend.placement">right</option>
      <option name="height">200</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
</row>
```
Thank you
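For problem 1, one possible approach (a sketch, not tested against this exact dashboard) is to format the value as a string with the `printf` eval function, which keeps the trailing ".0". Note that this turns PERCENTAGE into a string, which may affect how the chart sorts or plots the values, so it is worth testing on a copy of the panel first:

```
| eval PERCENTAGE = printf("%.1f", round(PERCENTAGE, 1))
```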
It would help if you describe what "this is not working" actually means.  What is the result you get, and what is the result you expect?  What is the logic between your data and your expected result?  Using your sample data and your sample stats, this is the table:

| clname   | cores | cpu | fhost   | vhost   | total_vhost_cpus | total_fhost_cores |
|----------|-------|-----|---------|---------|------------------|-------------------|
| clusterx | 2     | 1   | f-hosta | v-hosta | 4                | 4                 |
| clusterx | 2     | 1   | f-hosta | v-hostb | 4                | 4                 |
| clusterx | 4     | 1   | f-hostb | v-hostc | 4                | 4                 |
| clusterx | 6     | 1   | f-hostc | v-hostd | 4                | 6                 |

Can you explain why this is not what you expect?  What is the problem you are trying to solve using two eventstats commands with raw events, instead of a stats?
Is there any other command to transfer data from one index into another index?
Hi, one of our use cases is giving the error below while sending email to recipients. The use case is configured to run every 20 minutes, and its alert trigger action is send alert and notable. We cannot see the results in the notable index and we are not receiving the email. If we run the use case manually, we can see the results. I have checked the python logs; nothing was found about the use case.

Error message:

08-17-2023 03:04:34.681 +0000 ERROR ScriptRunner [6973 AlertNotifierWorker-0] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=https://XXxxx-splunk.com/app/SplunkEnterpriseSecuritySuite/@go?sid=scheduler_c29jX2VzX3JlcG9ydA__SplunkEnterpriseSecuritySuite__RMD5851862a85f91df65_at_1692241200_11388_A08142CF-D931-4BE1-ADEA-3D2962FFE6D6" "ssname=xxxxxx- xxxx-xx-xxx -  - Rule" "graceful=True" "trigger_time=1692241474" results_file="/opt/splunk/var/run/splunk/dispatch/scheduler_c29jX2VzX3JlcG9ydA__SplunkEnterpriseSecuritySuite__"': _csv.Error: line contains NUL External search command 'sendemail' returned error code 1.
Hi, I can see duplicate data in Splunk using the query below:

index="indexname" | stats count by _raw | where count >1

I checked 10 different indexes, and 8 of them have duplicate logs.
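A sketch (not from the original post) for comparing the amount of duplication across several indexes in a single search; `index1` and `index2` are placeholders for your own index names:

```
(index=index1 OR index=index2)
| stats count BY index, _raw
| where count > 1
| stats sum(count) AS duplicated_events dc(_raw) AS distinct_duplicated_raws BY index
```

This gives one row per index, showing both how many events are involved and how many distinct raw events are duplicated.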
Hi. What I am trying to do is the following: I need to take the values I have in codes_tech and look them up in columns A, B, C, etc., and if a match is found, mark it with "Es aqui" ("It's here"). My query is:

index=notable search_name="Endpoint - KTH*"
| fields technique_mitre
| stats count by technique_mitre
| eval tech_id=technique_mitre
| rex field=tech_id "^(?<codes_tech>[^\.]+)"
| stats count by codes_tech
| makemv delim=", " codes_tech
| mvexpand codes_tech
| fields codes_tech
| inputlookup append=t mitre_lookup
| foreach TA00* [ | lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0) | eval <<FIELD>>=<<FIELD>>_technique_name . " " . <<FIELD>> | eval <<FIELD>>=split(replace(<<FIELD>>,"\.",".|"),"|") ]
| eval TA0004 = if(mvfind(codes_tech, TA0004) > -1, TA0001." Es aqui", TA0004)
Hello there, I would like some help with my query. I want to summarize 2 fields into 2 new columns. One field is unique, but the other is not: the field fhost is not unique. I want the sum of the field "cores" by unique combination of the columns "clname" and "fhost". I am struggling with how to do this properly and how I can sum over the unique values of the column "fhost".

| makeresults
| eval clname="clusterx", fhost="f-hosta", vhost="v-hosta", cores=2, cpu=1
| append [| makeresults | eval clname="clusterx", fhost="f-hosta", vhost="v-hostb", cores=2, cpu=1 ]
| append [| makeresults | eval clname="clusterx", fhost="f-hostb", vhost="v-hostc", cores=4, cpu=1 ]
| append [| makeresults | eval clname="clusterx", fhost="f-hostc", vhost="v-hostd", cores=6, cpu=1 ]
| eventstats sum(cpu) as total_vhost_cpus by clname ``` This is not working ```
| eventstats sum(cores) as total_fhost_cores by clname fhost ``` The output should be in table format ```
| table clname cores cpu fhost vhost total_vhost_cpus total_fhost_cores

Thank you in advance. Harry
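If the intent is to count each fhost's cores only once per cluster (an assumption about what "sum unique" means here), one possible sketch is to zero out the repeated fhost rows with streamstats before summing with eventstats:

```
| streamstats count AS occurrence BY clname, fhost
| eval cores_once = if(occurrence = 1, cores, 0)
| eventstats sum(cores_once) AS total_unique_fhost_cores BY clname
| fields - occurrence cores_once
```

With the sample data above, each fhost contributes its cores value once (2 + 4 + 6), so every clusterx row would carry total_unique_fhost_cores = 12.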
@darljed  Is it possible to share your full code? You can mask important fields & values in XML and JS. KV
Hi Team, I am encountering an issue while trying to enable Transaction Analytics for a Python application using the Python agent. The stand-alone agent is working fine; however, I have observed that there is no analytics-related data reflected on the controller. I followed the steps from link1 and link2. It is a Django-based application, and below is the appdynamics.cfg file:

[agent]
app = Test
tier = T1
node = N1

[controller]
host = <saas-controller>
port = 443
ssl = true
account = <saas controller-account>
accesskey = <password>

[log]
dir = path/to/directory
level = debug
debugging = on

[services:analytics]
host = https://bom-ana-api.saas.appdynamics.com
port = 9090
ssl = true
enabled = true

To run the agent I use the command 'pyagent proxy start -c <path/to/appdynamics.cfg>'. Please let me know what may be done here.
Can you find some information around the missing data - e.g. does it transfer a fixed number of events, but miss some others. Are the time stamps the same and is the time window you are searching the same on both indexes?    
Hi Olly experts, we have a 3-node Kubernetes cluster that we want to monitor, so I installed OTEL using Helm (version 3.0) and it was deployed successfully, but we were not able to find Kubernetes data in the Olly dashboards. Please suggest how to troubleshoot the problem. Note: the firewall is disabled on the server. Regards, Eshwar
@forecastingLogs I think it's getting truncated when more data is present. How do I index all the data without missing events?

[QualificationTests]
AUTO_KV_JSON =
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Database
disabled = false
pulldown_type = 1
truncate=0

I added extraction_cutoff in limits.conf too.
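One thing worth checking: in props.conf the documented setting name is uppercase TRUNCATE, and a value of 0 disables truncation entirely. A minimal sketch of the relevant part of the stanza (keeping the existing JSON settings; apply it where the sourcetype is parsed):

```
# props.conf -- sketch only; TRUNCATE is the documented setting name
[QualificationTests]
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
```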
I am getting events properly, but the total number of events in index A does not match the number in index B.
What sort of data is missing? Have you tried using output_format=hec for the collect command  https://docs.splunk.com/Documentation/Splunk/9.0.3/SearchReference/Collect#arg-options  
I am using the collect command to transfer data from one index to another index. The query is like: index=A source=sourceA sourcetype=sourcetypeA host=hostA | collect index=B source=sourceA sourcetype=sourcetypeA host=hostA. But some data is missing. Why?
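A sketch of the search above with the output_format=hec option from the linked collect documentation (available on recent Splunk versions; with this format, collect preserves the original host, source, and sourcetype from the events, so those arguments can be dropped):

```
index=A source=sourceA sourcetype=sourcetypeA host=hostA
| collect index=B output_format=hec
```

Also note that SPL field filters use = rather than :, so sourcetype:sourcetypeA in the original search would not match as intended.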