All Posts

Hello,

Now it is working. This did the trick:

[replicationDenylist]
ms_graph = ...TA-microsoft-graph-security-add-on-for-splunk[/\\]bin[/\\]...

Thanks
I have two fields, let's say AA and BB, and I am trying to filter results where AA and BB = 00 OR 10, using something like this:

index="idx-some-index" sourcetype="dbx" source="some.*.source" | where (AA AND BB)== (00 OR 10)

But I am getting this error: Error in 'where' command: Type checking failed. 'AND' only takes boolean arguments.

I have also tried:

index="idx-some-index" sourcetype="dbx" source="some.*.source" | where AA =(00 OR 10) AND (BB=(OO OR 10))

But I am getting the same kind of error: Type checking failed. 'OR' only takes boolean arguments.

Please help!
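A minimal sketch of the intended filter, assuming AA and BB are the real field names and 00/10 are string values (drop the quotes if they are numeric). In where, AND and OR combine whole comparisons, so each comparison has to be spelled out:

index="idx-some-index" sourcetype="dbx" source="some.*.source"
| where (AA="00" OR AA="10") AND (BB="00" OR BB="10")

On recent Splunk versions the IN operator is an equivalent shorthand: | where AA IN ("00","10") AND BB IN ("00","10")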
@gcusello @bowesmana We are on Splunk Cloud and we use workload-based licensing, i.e. SVC. So the query you provided is not giving the aggregate daily ingest per index for the last 7 days.
It appears that the GuardDuty logs are collected via CloudWatch, which this TA supports (https://splunkbase.splunk.com/app/1876), so this is most likely what you need. I think the old TAs used to be separate and have now been combined into this one TA. See the different sourcetypes - for you it's aws:cloudwatchlogs:guardduty: https://docs.splunk.com/Documentation/AddOns/released/AWS/DataTypes

General info on this TA: https://docs.splunk.com/Documentation/AddOns/released/AWS/Description
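Once the add-on's CloudWatch Logs input is configured, a quick sanity check is a search against that sourcetype (index=* here is only to avoid guessing the index name - narrow it to whichever index the input writes to):

index=* sourcetype=aws:cloudwatchlogs:guardduty earliest=-24h
| stats count by index, source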
Hi Team, we want to send AWS GuardDuty logs to Splunk Cloud. What is the procedure to achieve this? Earlier there was the Amazon GuardDuty Add-on for Splunk (https://splunkbase.splunk.com/app/3790), but it is currently archived. Is there an add-on or app we can use to collect the events and onboard the logs to Splunk? Kindly help to check and advise.
We have integrated the AWS GuardDuty logs into Splunk through an S3 bucket. Recently, we noticed this error in our health check: The file extension fileloaction.jsonl is not in a delimited file format. Please suggest how I can resolve this.
Thank you all so far
I have checked the following:

splunk list inputstatus
/data/syslog/opswat/metadefender/10.x.y.z/2024-04-23-engine.log
file position = 176673
file size = 176673
parent = /data/syslog/opswat/metadefender/10.x.y.z/*-engine.log
percent = 100.00
type = finished reading

Checking the file itself:

more /data/syslog/opswat/metadefender/10.x.y.z/2024-04-23-engine.log | wc
584 11089 176673

And in Splunk there are the following events for 2024-04-23:

index=mb_secgw_cdr_tst_logs | stats count
460

I have selected an event from the syslog data and searched across the whole index with no success, so the event is probably not indexed. syslog-ng is configured to write the logs per host per day into separate files, and events are missing at random throughout the day. I will try to get more information from the HFs.
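A sketch of one way to narrow this down further, counting indexed events per source file so the result can be compared with the wc line counts on disk (index and path taken from the post; adjust as needed):

index=mb_secgw_cdr_tst_logs source="/data/syslog/opswat/metadefender/*/2024-04-23-engine.log"
| stats count by host, source

If a source stays below its line count on disk, the events are being lost before indexing (on the UF or HF side) rather than at search time.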
Hello, I have multiple dashboards where I use the loadjob command, since it is very useful for recycling big search results that need to be manipulated in other searches. I know that unscheduled jobs have a default TTL of 10 minutes, and I don't want to change their default TTL globally on the system.

Why I'm asking: if a user opens a dashboard containing the loadjob command at, say, 10:00 and then comes back to it at 10:15, I don't want them to see a message about the job no longer existing (like 'Cannot load artifact' or similar). To prevent this, I want to write some JavaScript that captures the job ID generated by the user's search and, every 9 minutes, increments its TTL by 10 minutes. The JavaScript will loop, waiting 9 minutes and extending the job's TTL over and over until the user closes the dashboard (only at that point will I let the job expire after 10 minutes, so I don't overload the disk).

So, the question is: if I have a job ID X, how can I increment only its TTL by 10 minutes? Is there an SPL command for this? And how can I launch it from JavaScript? Thank you very much to anyone who can help!
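As far as I know there is no SPL command that changes another job's TTL, but the search job REST API has a control action for it; a rough sketch (SID, host and TTL value are placeholders):

POST https://<splunkd-host>:8089/services/search/jobs/<sid>/control
action=setttl
ttl=1200

From dashboard JavaScript, the same endpoint is normally reachable through the Splunk Web proxy path (/<locale>/splunkd/__raw/services/search/jobs/<sid>/control), so a timer on the page can keep POSTing action=setttl while the dashboard stays open.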
Hi Folks, I tried configuring a send HTTP request with the required fields provided, on Splunk Cloud, but the request is not sent to the destination. Can anyone help me with this? Regards, Sham
Hi @Splunkerninja, does the search in [Settings > License > License Consumption > last 60 days > divided by index] run? I only copied this search. Ciao. Giuseppe
The first query is not giving me any results. Even when I replaced the macro with the actual query, it gives zero results.

I basically want the total daily ingest of each index over the last 7 days.

index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false
| join type=outer _time [ search index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | dedup _time stack | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
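A simpler sketch that just sums license usage per index and day, built from the same license_usage.log events used above (on Splunk Cloud the type=Usage lines live in _internal on the license manager, so results depend on that data being searchable from your search head):

index=_internal source=*license_usage.log* type=Usage earliest=-7d@d
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as bytes by _time idx
| eval GB=round(bytes/1024/1024/1024,3)
| fields _time idx GB

Pipe the result to | xyseries _time idx GB if you want one column per index.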
Sometimes you do encounter extractions that are not working as expected, and sometimes the logs themselves change. If that happens, apply the fix to the local config, as you have done; otherwise it will get overwritten by a new version of the TA. As this add-on is developed by Fortinet, there may be an email address you can report the issue to so they can fix it in the next version - look for the details on Splunkbase or in the documentation.
Hi all,

we collect some JSON data from a logfile with a universal forwarder. Most of the time the events are indexed correctly with already extracted fields, but for a few events the fields are not automatically extracted. If I reindex the same events, the indexed extraction is also fine. I did not find any entries in splunkd.log indicating that it is not working.

The following props.conf is on the universal forwarder and the heavy forwarder (maybe someone could explain which parameter is needed on the UF and which on the HF):

[svbz_swapp_task_activity_log]
CHARSET=UTF-8
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=json
KV_MODE=none
category=Custom
disabled=false
pulldown_type=true
TIMESTAMP_FIELDS=date_millis
TIME_FORMAT=%s%3N

The following props.conf is on the search head:

[svbz_swapp_task_activity_log]
KV_MODE=none

The first time the event was indexed it looked like the first screenshot; when I reindex the same event into another index, it looks fine (second screenshot). In the last 7 days it worked correctly for about 32,000 events, but for 168 events the automatic field extraction did not work. Here is an example event:

{"task_id": 100562, "date_millis": 1713475816310, "year": 2024, "month": 4, "day": 18, "hour": 23, "minute": 30, "second": 16, "action": "start", "step_name": "XXX", "status": "started", "username": "system", "organization": "XXX", "workflow_id": 14909, "workflow_scheme_name": "XXX", "workflow_status": "started", "workflow_date_started": 1713332220965, "workflow_date_finished": null, "escalation_level": 0, "entry_attribute_1": 1711753200000, "entry_attribute_2": "manual_upload", "entry_attribute_3": 226027, "entry_attribute_4": null, "entry_attribute_5": null}

Does someone have an idea why it sometimes works and sometimes doesn't? If I now change KV_MODE on the search head, the fields are shown correctly for these 168 events, but for all other events the fields are extracted twice. Using spath with the same names extracts them only once. What is the best workaround for the already indexed events to get proper search results?

Thanks and kind regards
Kathrin
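A sketch of a search-time workaround for the events that are already indexed, building on the spath behaviour described above (the index name is a placeholder; the sourcetype and field names come from the post):

index=<your_index> sourcetype=svbz_swapp_task_activity_log
| spath
| table _time task_id action step_name status username workflow_id workflow_status

Since spath re-parses _raw as JSON, the 168 events without automatic extractions get their fields back, and, per the observation above, events that already have the fields end up with a single copy of each rather than duplicates.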
Hello,

We are encountering a problem with the parsing in the Fortigate add-on: it does not recognize the devid of our equipment. This FortiGate has a serial number starting with FD, so it is not matched by the regex

^.+?devid=\"?F(?:G|W|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)

from the stanza [force_sourcetype_fortigate]. We updated it on our side, but is this behavior normal?

Thanks in advance, best regards.
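For reference, a sketch of the kind of local override described above - simply adding D to the alternation so serial numbers starting with FD also match; the rest of the pattern is unchanged from the post, and it belongs in a local copy of the stanza so a TA update does not overwrite it:

^.+?devid=\"?F(?:G|W|D|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)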
It seems there might be a misunderstanding. I'd prefer to steer clear of the makeresults command. My aim is to pinpoint a particular index (application) within a specific environment and gather all events categorized as errors or warnings. Ideally, I'd like these events consolidated into a single location for ease of review. However, not all errors or warnings are pertinent to my needs. Therefore, I'd like a filter mechanism where I can selectively exclude events by entering a portion of the log message body into a text box. This text input would then be added to a multi-select, enabling me to filter out unwanted events effectively. I'd then use the token of that multi-select input in the queries I already have (see the dashboard I provided); roughly along the lines of the sketch below. Thank you in advance.
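A minimal sketch of that kind of filter, with hypothetical names throughout - app_index for the index, env and log_level for the fields, env_tok and exclude_tok for the dashboard tokens:

index=app_index env="$env_tok$" (log_level="ERROR" OR log_level="WARN")
| search NOT ($exclude_tok$)

The idea is that the multi-select renders each chosen snippet as something like message="*snippet*", joined with an OR delimiter, so NOT (...) drops any event whose message matches one of the selected snippets; give the token a default that matches nothing so the search still runs when the multi-select is empty.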
Yeah sure, I have a lookup called panels.csv with a single column:

Panels
Critical severity vulnerabilities
High severity vulnerabilities
Vulnerabilities solved
Local virtual machines
Outdated operation systems - Server
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Defender enrollment status
Clients with old Defender patterns
Systems not found in patch management database
Clients missing critical updates
Servers with blacklisted Software
Clients with blacklisted Software
Total Installed blacklisted Software
Blacklisted Software Exceptions

I want to display these horizontally. I was using the search you gave, but the result comes back in this (alphabetical) order:

Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

I want to display it like this, but with a section for each entry, just like a table.
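A sketch of one way to get a single horizontal row while keeping the original lookup order, assuming panels.csv really has just the one Panels column:

| inputlookup panels.csv
| transpose 0
| fields - column

transpose turns the 15 rows into 15 columns of a single row (named row 1, row 2, ...) in the order they appear in the lookup, which sidesteps the alphabetical re-sort you are seeing.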
Hi @Splunkerninja, do you want to calculate the license consumption or the number of events per index and per day?

In the first case, see [Settings > License > License Consumption past 60 days > by Index], or run this:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false
| join type=outer _time [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | dedup _time stack | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

In the second case, you could try something like this:

index=* | bin span=1d _time | chart count OVER index BY _time

Ciao. Giuseppe
So what did you try, and what gave you the wrong results? This is the basic search:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log idx=* st=*
| stats sum(b) as bytes by idx
| eval gb=round(bytes/1024/1024/1024,3)

Run that over the time range you want.
There is a table visualisation in Splunk, and when you run that command you are getting a table visualisation. Perhaps you can describe your data better, because you are clearly looking for something different from just panels a, b, c. Your post describing this:

Panels
Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

doesn't actually tell me anything useful - can you describe your lookup data, what it contains, and give a better description of how you want the data to look in your table?