Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

The tokens passed in the URL need to be constructed from the multi-select input, not hard-coded.
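For illustration only, a minimal hedged sketch of how a search could consume such a token once the URL passes it through (the token name host_tok and the field host are assumptions, not from the original dashboard; it also assumes the multiselect input's valuePrefix, valueSuffix, and delimiter are set so the token expands to a quoted, comma-separated list):

index=web host IN ($host_tok$)
| stats count BY host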
You could start with something like this:

index=*
| fields - _time _raw
| foreach * [eval <<FIELD>>=if("<<FIELD>>"=="index",index,sourcetype)]
| table *
| fillnull value="N/A"
| foreach * [eval sourcetype=if("<<FIELD>>"!="sourcetype",if('<<FIELD>>'!="N/A",mvappend(sourcetype,"<<FIELD>>"),sourcetype),sourcetype)]
| dedup sourcetype
| table sourcetype

It may fail due to the amount of data being brought back, so you might want to break it up by index. Also, it works by looking at the fields returned in the events, so if some fields are not used in the time period covered, they will not show up; you might want to run it at different times of the day rather than over longer periods.
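If the goal is just to list which fields actually occur, a hedged alternative sketch using fieldsummary may be simpler (note it summarizes fields across the whole result set, so add a sourcetype filter to the base search if you need the per-sourcetype grouping):

index=*
| fieldsummary
| where count>0
| table field count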
Hi @somesoni2, I can't really get the first search to work. How are the count calculations being performed? x and y are not integers, so I am not sure how sum() is going to work.
Here is an example of the event log output. Both events are the same log, only with a different date. I see both of them in the Splunk output, but I don't want to see them when the search returns two identical event logs. That is: if I filter for 7 days and there is only one event log with CVE-2023-21554, I want to see it because it is "new"; but if I filter for 30 days and find two identical event logs, I don't want it in the output because it is not new. Right now I still see it.

16/10/2023 04:00:03.000
"175373","CVE-2023-21554","10.0","Critical","10.56.93.133","tcp","1801","Microsoft Message Queuing RCE (CVE-2023-21554, QueueJumper)","A message queuing application is affected a remote code execution vulnerability.","The Microsoft Message Queuing running on the remote host is affected by a remote code execution vulnerability. An unauthenticated remote attacker can exploit this, via a specially crafted message, to execute arbitrary code on the remote host.","Apply updates in accordance with the vendor advisory.","https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-21554 http://www.nessus.org/u?383fb650","Nessus was able to detect the issue by sending a specially crafted message to remote TCP port 1801."
CVE = CVE-2023-21554
Risk = Critical
extracted_Host = 192.168.0.1
sourcetype = csv

09/10/2023 04:00:03.000
"175373","CVE-2023-21554","10.0","Critical","10.56.93.133","tcp","1801","Microsoft Message Queuing RCE (CVE-2023-21554, QueueJumper)","A message queuing application is affected a remote code execution vulnerability.","The Microsoft Message Queuing running on the remote host is affected by a remote code execution vulnerability. An unauthenticated remote attacker can exploit this, via a specially crafted message, to execute arbitrary code on the remote host.","Apply updates in accordance with the vendor advisory.","https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-21554 http://www.nessus.org/u?383fb650","Nessus was able to detect the issue by sending a specially crafted message to remote TCP port 1801."
CVE = CVE-2023-21554
Risk = Critical
extracted_Host = 192.168.0.1
sourcetype = csv
Hi @LionSplunk,
you should identify the period using eval. So if you run the scan every day, you could try something like this:

index=nessus Risk=Critical
| eval period=if(_time<now()-86400,"Previous","Last")
| stats dc(period) AS period_count values(period) AS period BY CVE extracted_Host
| where period_count=1 AND period="Last"
| rename extracted_Host AS Host
| table CVE Host

Ciao.
Giuseppe
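As a sanity check, a minimal hedged sketch with synthetic events (all values below are made up) shows why dc(period)=1 isolates new findings:

| makeresults count=2
| streamstats count AS n
| eval _time=if(n=1, now()-3600, now()-604800)
| eval CVE="CVE-0000-0001", extracted_Host="192.168.0.1"
| eval period=if(_time<now()-86400,"Previous","Last")
| stats dc(period) AS period_count values(period) AS period BY CVE extracted_Host
| where period_count=1 AND period="Last"

With both synthetic events present, the CVE appears in both periods (period_count=2) and is suppressed; change count=2 to count=1 and it comes back as a "new" finding.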
Thank you so much for your prompt response.
Hello, I get a vulnerability scan log from all hosts in our infrastructure into Splunk every 7 days; in the future the scan should run every day. Now I want to filter which of the vulnerability findings are really new and which are identical to the last scan, because those are not new anymore, are still present for a known reason, and should be excluded from the search output. When the scan outputs are the same, the CVE number and the message are identical; only the date differs. My output should show only scan event messages that appear exactly once in the logs. When a scan log with the same CVE number appears twice, it should not be shown in the output. Ideally I could also see in the statistics field which of the extracted_Host values are new. Right now my filter looks like this: I can see in the statistics which extracted hosts are new with the CVE number, but in the main event logs I still see identical logs that are not new anymore. I tried dedup, but that only removes the old event log's field value; I can exclude the old event log, but the newest is still there.

index=nessus Risk=Critical
| stats count as event_count by CVE, extracted_Host
| where event_count=1
| rename extracted_Host as Host
| table CVE, Host

Thanks for the help.
Hi @maede_yavari, yes, Splunk 9.1 is compatible with RHEL 9: https://docs.splunk.com/Documentation/Splunk/9.1.0/Installation/Systemrequirements
It's looking very good, thank you! I just don't understand the calculated results. For example: in valueCount I have 294723 out of a total of 1360007, which should be ≈ 21.67%, but in the Pct field I have 0.33. Do you know why? All my results in Pct are incorrect.
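For reference, a hedged sketch of how the percentage is usually computed (valueCount is from the post; deriving total via eventstats is an assumption about the original search):

| eventstats sum(valueCount) AS total
| eval Pct=round(100 * valueCount / total, 2)

294723 / 1360007 * 100 ≈ 21.67, so a Pct of 0.33 suggests the search is dividing by a different denominator than this total (it is not even the bare 0.2167 ratio).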
Thanks so much for your feedback, I will try the solution.
These are the sample parameters for index, host, and source:

index="production" host="abc.com-i-1234" source="Log-*-3333-abc4j.log"

Suppose there are three Splunk queries as shown below.

Query 1:
index="production" host="abc.com-*" source="Log-*"
| eval ID=substr(host,9,7)
| dedup ID
| table ID

Suppose it gives output as:
ID
i-1234
i-5678
i-9123
i-4567

Query 2:
index="production" host="abc.com-$field2$" source="Log-*-*-abc4j.log"
| eval Sub_ID = mvindex(split(source,"-"),2)
| dedup Sub_ID
| table Sub_ID

Suppose it gives output as:
Sub_ID
111
222
3333
4444
555
666
7777
8888

where $field2$ denotes the "ID" generated from Query 1, and each "ID" from Query 1 is mapped to two values of "Sub_ID" generated from Query 2. E.g. if the query were:

index="production" host="abc.com-i-1234" source="Log-*-*-abc4j.log"
| eval Sub_ID = mvindex(split(source,"-"),2)
| dedup Sub_ID
| table Sub_ID

it would give output as:
Sub_ID
111
222

Query 3:
index="production" host="abc.com-$field2$" source="Log-*-$field3$-log4j.log"
| dedup RP_Remote_User
| table RP_Remote_User
| stats count as events

Suppose it gives output as:
events: 52

where $field2$ denotes the "ID" generated from Query 1 and $field3$ denotes the "Sub_ID" generated from Query 2. E.g. if the query were:

index="production" host="abc.com-i-1234" source="Log-*-3333-log4j.log"
| dedup RP_Remote_User
| table RP_Remote_User
| stats count as events

it would give output as (on the basis of "ID": i-1234 and "Sub_ID": 3333):
events: 52

Could you please help me with the Splunk query to generate the output in tabular format as below (count of events corresponding to each ID and its Sub_ID), with the help of the above three queries:

ID      Sub_ID  Events
i-1234  111     38
        222     48
i-5678  3333    52
        4444    45
i-9123  555     23
        666     34
i-4567  7777    12
        8888    29
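A hedged sketch of one way to get that table in a single search, rather than chaining three queries (this assumes ID and Sub_ID can both be derived from host and source in one pass, follows Query 2's abc4j.log source pattern, and treats "Events" as the distinct RP_Remote_User count per ID/Sub_ID, matching Query 3's dedup-then-count):

index="production" host="abc.com-*" source="Log-*-*-abc4j.log"
| eval ID=substr(host,9,7)
| eval Sub_ID=mvindex(split(source,"-"),2)
| stats dc(RP_Remote_User) AS Events BY ID Sub_ID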
Hi, you can use the following API GET call: https://<controller url>/controller/restui/user/account
I have reverted back to using UDP, and now everything is back to normal again.
Is Splunk 9.1 completely compatible with RHEL 9? I need to know which version of Splunk is completely compatible with which version of RHEL and supports all features. As far as I know, RHEL 9 uses kernel 5.14.0; is Splunk completely compatible with this version?
Hi @muradgh, I had the same issue. After you switched to the UDP port to deliver Fortigate syslog, was the issue permanently resolved?
Hi Giuseppe. I should have been clearer: yes, it is a perfectly valid search, except for the many joins, which I will also rewrite with stats. And yes, now I see it: it is a message template that is part of the logging, so the {@fieldname} is just part of the normal search. Thank you.
Hi @dgwann, what are the results of executing your searches? Ciao. Giuseppe
Hi @las,
I don't know why your search doesn't run, but it is surely a very slow search, having many join commands inside it (Splunk isn't a DB, and the join command should be used only when there is no other solution, and only with few events!). Try a different approach using stats:

index=atp-aes-prod (sourcetype=atp_aes_json SourceContext=RevisionLogger Properties.Url="/api/Document/get-merged-pdf" Properties.IsImpersonated=false) OR (SourceContext=ANS.Platform.Application.Commands.Queries.Selfservice.GenerateMergedPdf.GenerateMergedPdfHandler MessageTemplate="User tries to merge*") OR (SourceContext=ANS.Platform.Integrations.GetOrganized.GoDocumentsService MessageTemplate="Start CombineToPdf method*") OR (SourceContext=ANS.Platform.Domain.Services.Selfservice.Authorization.SelfServiceAuthorizationService MessageTemplate="SelfServiceAuthorizationService took {@elapsedMilliseconds} ms to be constructed for part {@partId}.")
| stats values(Properties.Url) AS Url values(Timestamp) AS Timestamp values(Properties.CompanyName) AS CompanyName values(Properties.partId) AS partId values(Properties.documents) AS documents BY CorrelationId

There is sometimes also an issue (and probably this is the problem with your original search) in using fields with a dot inside; in that case use rename or quotes:

index=atp-aes-prod (sourcetype=atp_aes_json SourceContext=RevisionLogger Properties.Url="/api/Document/get-merged-pdf" Properties.IsImpersonated=false) OR (SourceContext=ANS.Platform.Application.Commands.Queries.Selfservice.GenerateMergedPdf.GenerateMergedPdfHandler MessageTemplate="User tries to merge*") OR (SourceContext=ANS.Platform.Integrations.GetOrganized.GoDocumentsService MessageTemplate="Start CombineToPdf method*") OR (SourceContext=ANS.Platform.Domain.Services.Selfservice.Authorization.SelfServiceAuthorizationService MessageTemplate="SelfServiceAuthorizationService took {@elapsedMilliseconds} ms to be constructed for part {@partId}.")
| rename Properties.Url AS Url Properties.CompanyName AS CompanyName Properties.partId AS partId Properties.documents AS documents
| stats values(Url) AS Url values(Timestamp) AS Timestamp values(CompanyName) AS CompanyName values(partId) AS partId values(documents) AS documents BY CorrelationId

Ciao.
Giuseppe
Hi.
I have been given a search that I need some help deciphering:

index=atp-aes-prod sourcetype=atp_aes_json SourceContext=RevisionLogger Properties.Url="/api/Document/get-merged-pdf" Properties.IsImpersonated=false
| join type=inner CorrelationId [search index=atp-aes-prod SourceContext=ANS.Platform.Application.Commands.Queries.Selfservice.GenerateMergedPdf.GenerateMergedPdfHandler MessageTemplate="User tries to merge*"]
| join type=inner CorrelationId [search index=atp-aes-prod SourceContext=ANS.Platform.Integrations.GetOrganized.GoDocumentsService MessageTemplate="Start CombineToPdf method*"]
| join type=inner CorrelationId [search index=atp-aes-prod SourceContext=ANS.Platform.Domain.Services.Selfservice.Authorization.SelfServiceAuthorizationService MessageTemplate="SelfServiceAuthorizationService took {@elapsedMilliseconds} ms to be constructed for part {@partId}."]
| table Properties.Url, Timestamp, Properties.CompanyName, Properties.partId, Properties.documents

It does not run on our system and never will; I think it was developed by somebody versed in relational databases. I'm trying to rewrite this search, but I'm slightly baffled by the {@elapsedMilliseconds} and {@partId}. Does anybody know what they are doing?

Kind regards
las
Hi, I am from the Cisco internal engineering team. I want to try AppDynamics for my Cisco product. Can you please guide me on how to get a trial license? Thanks, Udaya