All Posts

Thank you so much for your prompt response
Hello, I get a vulnerability scan log in Splunk every 7 days from all hosts in our infrastructure; in the future the scan should run every day. Now I want to filter which of the vulnerability findings are really new and which are identical to the last scan, because those are no longer new, have a known reason for still appearing, and should be excluded from the search output. When the scan outputs are the same, the CVE number and the message are identical and only the date differs.

My output should show a scan message only when it appears a single time in the logs. When a scan result (same CVE number) appears twice in the logs, it should not be shown in the output. Ideally I could also see in the statistics which of the extracted_Host values are new.

Right now my filter looks like the search below. I can see in the statistics which hosts are new, with their CVE number, but in the main event list I still see recurring events that are no longer new. I tried dedup, but that only removes the older events; I can exclude the old event, but the newest one is still there.

index=nessus Risk=Critical
| stats count as event_count by CVE, extracted_Host
| where event_count=1
| rename extracted_Host as Host
| table CVE, Host

Thanks for the help.
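A minimal sketch of one possible approach, assuming the most recent scan can be identified by the latest _time in the index, and that a CVE/host pair first seen within that scan's window (here, the same day) counts as new; the index and field names are taken from the question:

index=nessus Risk=Critical
``` find when the most recent scan ran ```
| eventstats max(_time) as latest_scan_time
``` first time each CVE/host pair was ever seen ```
| stats earliest(_time) as first_seen, max(latest_scan_time) as latest_scan_time by CVE, extracted_Host
``` keep only pairs that first appeared during the latest scan window ```
| where first_seen >= relative_time(latest_scan_time, "-1d@d")
| rename extracted_Host as Host
| table CVE, Host

The where clause replaces event_count=1, so a finding that also appeared in an earlier scan is dropped even though it occurs again in the latest one. The -1d@d window is an assumption based on at most one scan per day; adjust it to match the actual scan cadence.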
Hi @maede_yavari .. Yes, Splunk 9.1 is compatible with RHEL 9.  https://docs.splunk.com/Documentation/Splunk/9.1.0/Installation/Systemrequirements  
It's looking very good, thank you! I just don't understand the calculated results. For example, in valueCount I have 294723 out of a total of 1360007, which should be ≈ 21.67%, but in the Pct field I have 0.33. Do you know why? All of my Pct results are incorrect.
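A minimal sketch of how the percentage could be computed explicitly, assuming Pct should be each valueCount over the grand total (the search that produces valueCount is not shown in the thread, so this is only the tail of the pipeline):

<your existing search producing valueCount>
``` grand total across all rows ```
| eventstats sum(valueCount) as grand_total
``` per-row percentage of the grand total ```
| eval Pct=round(100*valueCount/grand_total, 2)

With the numbers above this gives round(100*294723/1360007, 2) = 21.67, so if Pct comes out as 0.33, the divisor in the original search is probably not this grand total.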
Thanks so much for your feedback, I will try the solution.
These are the sample parameters for index, host, and source:

index="production" host="abc.com-i-1234" source="Log-*-3333-abc4j.log"

Suppose there are three Splunk queries as shown below:

Query 1:
index="production" host="abc.com-*" source="Log-*"
| eval ID=substr(host,9,7)
| dedup ID
| table ID

Suppose it gives output as:
ID
i-1234
i-5678
i-9123
i-4567

Query 2:
index="production" host="abc.com-$field2$" source="Log-*-*-abc4j.log"
| eval Sub_ID = mvindex(split(source,"-"),2)
| dedup Sub_ID
| table Sub_ID

Suppose it gives output as:
Sub_ID
111
222
3333
4444
555
666
7777
8888

where $field2$ denotes the "ID" generated from Query 1, and each "ID" from Query 1 is mapped to two values of "Sub_ID" generated from Query 2. E.g. if the query was:

index="production" host="abc.com-i-1234" source="Log-*-*-abc4j.log"
| eval Sub_ID = mvindex(split(source,"-"),2)
| dedup Sub_ID
| table Sub_ID

it will give output as:
Sub_ID
111
222

Query 3:
index="production" host="abc.com-$field2$" source="Log-*-$field3$-log4j.log"
| dedup RP_Remote_User
| table RP_Remote_User
| stats count as events

Suppose it gives output as:
events: 52

where $field2$ denotes the "ID" generated from Query 1 and $field3$ denotes the "Sub_ID" generated from Query 2. E.g. if the query was:

index="production" host="abc.com-i-1234" source="Log-*-3333-log4j.log"
| dedup RP_Remote_User
| table RP_Remote_User
| stats count as events

it will give output as (on the basis of "ID": i-1234 and "Sub_ID": 3333):
events: 52

Could you please help me with the Splunk query that generates the output in tabular format as below (the count of events corresponding to each ID and its Sub_ID), with the help of the above three queries?

ID      Sub_ID  Events
i-1234  111     38
        222     48
i-5678  3333    52
        4444    45
i-9123  555     23
        666     34
i-4567  7777    12
        8888    29
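A minimal sketch of a single search that could replace the three chained queries, assuming RP_Remote_User is extracted at search time for these events and that dedup RP_Remote_User followed by stats count is equivalent to a distinct count per ID/Sub_ID pair. The source filter below follows Query 2's pattern (abc4j.log); note the question itself alternates between abc4j.log and log4j.log, so adjust to whichever is correct:

index="production" host="abc.com-*" source="Log-*-*-abc4j.log"
``` derive ID from the host name and Sub_ID from the source path ```
| eval ID=substr(host,9,7)
| eval Sub_ID=mvindex(split(source,"-"),2)
``` distinct users per ID/Sub_ID replaces dedup + stats count ```
| stats dc(RP_Remote_User) as Events by ID, Sub_ID

Unlike the table in the question, stats repeats the ID value on every row; the blank cells there are a display convention.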
Hi, you can use the following API GET call  https://<controller url>/controller/restui/user/account
I have reverted back to using UDP, and now everything is back to normal again.
Is Splunk 9.1 completely compatible with RHEL 9? I need to know which version of Splunk is completely compatible with which version of RHEL and supports all features. As far as I know, RHEL 9 uses kernel 5.14.0; is Splunk completely compatible with this kernel version?
Hi @muradgh, I had the same issue. After you switched to the UDP port to deliver the Fortigate syslog, was the issue permanently resolved?
Hi Giuseppe.

I should have been clearer: yes, it is a perfectly valid search, except for the many joins, which I will also rewrite with stats. And yes, now I see it: it is a message template that is part of the logging, so the {@fieldname} is just part of the normal search.

Thank you
Hi @dgwann, what are the results when you execute your searches? Ciao. Giuseppe
Hi @las,
I don't know why your search doesn't run, but it's surely a very slow search, because it has many join commands inside it (Splunk isn't a DB, and the join command should be used only when there is no other solution, and only with few events!).

Try a different approach using stats:

index=atp-aes-prod (sourcetype=atp_aes_json SourceContext=RevisionLogger Properties.Url="/api/Document/get-merged-pdf" Properties.IsImpersonated=false) OR (SourceContext=ANS.Platform.Application.Commands.Queries.Selfservice.GenerateMergedPdf.GenerateMergedPdfHandler MessageTemplate="User tries to merge*") OR (SourceContext=ANS.Platform.Integrations.GetOrganized.GoDocumentsService MessageTemplate="Start CombineToPdf method*") OR (SourceContext=ANS.Platform.Domain.Services.Selfservice.Authorization.SelfServiceAuthorizationService MessageTemplate="SelfServiceAuthorizationService took {@elapsedMilliseconds} ms to be constructed for part {@partId}.")
| stats values(Properties.Url) AS Url values(Timestamp) AS Timestamp values(Properties.CompanyName) AS CompanyName values(Properties.partId) AS partId values(Properties.documents) AS documents BY CorrelationId

Sometimes there is also an issue with fields that have a dot in their name (and this is probably the problem with your original search); in that case, use rename or quotes:

index=atp-aes-prod (sourcetype=atp_aes_json SourceContext=RevisionLogger Properties.Url="/api/Document/get-merged-pdf" Properties.IsImpersonated=false) OR (SourceContext=ANS.Platform.Application.Commands.Queries.Selfservice.GenerateMergedPdf.GenerateMergedPdfHandler MessageTemplate="User tries to merge*") OR (SourceContext=ANS.Platform.Integrations.GetOrganized.GoDocumentsService MessageTemplate="Start CombineToPdf method*") OR (SourceContext=ANS.Platform.Domain.Services.Selfservice.Authorization.SelfServiceAuthorizationService MessageTemplate="SelfServiceAuthorizationService took {@elapsedMilliseconds} ms to be constructed for part {@partId}.")
| rename Properties.Url AS Url Properties.CompanyName AS CompanyName Properties.partId AS partId Properties.documents AS documents
| stats values(Url) AS Url values(Timestamp) AS Timestamp values(CompanyName) AS CompanyName values(partId) AS partId values(documents) AS documents BY CorrelationId

Ciao.
Giuseppe
Hi.

I have been given a search that I need some help deciphering:

index=atp-aes-prod sourcetype=atp_aes_json SourceContext=RevisionLogger Properties.Url="/api/Document/get-merged-pdf" Properties.IsImpersonated=false
| join type=inner CorrelationId [search index=atp-aes-prod SourceContext=ANS.Platform.Application.Commands.Queries.Selfservice.GenerateMergedPdf.GenerateMergedPdfHandler MessageTemplate="User tries to merge*"]
| join type=inner CorrelationId [search index=atp-aes-prod SourceContext=ANS.Platform.Integrations.GetOrganized.GoDocumentsService MessageTemplate="Start CombineToPdf method*"]
| join type=inner CorrelationId [search index=atp-aes-prod SourceContext=ANS.Platform.Domain.Services.Selfservice.Authorization.SelfServiceAuthorizationService MessageTemplate="SelfServiceAuthorizationService took {@elapsedMilliseconds} ms to be constructed for part {@partId}."]
| table Properties.Url, Timestamp, Properties.CompanyName, Properties.partId, Properties.documents

It does not run on our system and never will; I think it was developed by somebody versed in relational databases. I'm trying to rewrite this search, but I'm slightly baffled by the {@elapsedMilliseconds} and {@partId}. Does anybody know what they are doing?

Kind regards
las
Hi, I am from the Cisco internal engineering team. I want to try AppDynamics for my Cisco product. Can you please guide me on how to get a trial license? Thanks, Udaya
Hello @ITWhisperer, is the above data sufficient to resolve this issue? Could you please help me with this?
Hi @sateesh250795 .... may we know if you got an answer to your question? Thanks.
I think the bin command never really works with the start or end parameter as documented. For me, I need a simple behavior like this, for a span of 10:

1 = 1-10
2 = 1-10
10 = 1-10
11 = 11-20
19 = 11-20
20 = 11-20
21 = 21-30

So I created a macro called bin2 like this:

macro body:
eval $data$=$data$-1
| bin span=$span$ $data$ as bucket
| eval $data$=$data$+1
| rex field=bucket "^(?<_bin_start>\d+)"
| rex field=bucket "\-(?<_bin_end>\d+)$"
| eval _bin_start=_bin_start+1
| eval bucket=_bin_start."-"._bin_end
| fields - _bin_start, _bin_end

macro arguments: data,span

And here is an example query:

| makeresults count=1
| eval data = "1,5,9,10,11,19,20,25,29"
| makemv data delim=","
| mvexpand data
| `bin2(data, 10)`
| table data, bucket

And here is the result:

data  bucket
1     1-10
5     1-10
9     1-10
10    1-10
11    11-20
19    11-20
20    11-20
25    21-30
29    21-30
@Manish_Sharma wrote: As in, if I search for a term like "Error", I want to be able to see 10 lines before and after this message.

Hi @Manish_Sharma ... by "10 lines before and after", I assume you would like to see the 10 logs/events before and after the "error" log/event. If so, you can try this: expand the "error" log/event; it will have a field "_time" with a drop-down arrow. When you click that drop-down, you can find "Nearby Events", where you can add plus or minus 5 seconds (or minutes, hours, etc.).
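As a pure-SPL alternative, here is a minimal sketch using map to pull the events surrounding a match; your_index is a placeholder, the ±5 second window is arbitrary, and map runs one search per input row, so the matches are limited first:

index=your_index "Error"
``` limit matches, since map launches a search per row ```
| head 1
``` build a time window around each matching event ```
| eval earliest_t=_time-5, latest_t=_time+5
``` fetch everything in that window ```
| map maxsearches=10 search="search index=your_index earliest=$earliest_t$ latest=$latest_t$ | sort _time"

Note this returns events within a time window rather than an exact count of 10 lines before and after.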
Hello Community, I have sample data as below:

2023-10-17T17:14:24,436Z client-id=1159222917, transaction-id=522f4012-9737-483c-a3bb-8f23f146da0f [INFO ] [http-nio-9010-exec-3] c.c.a.s.AService AddressMetrics: {"fieldsToCompare":["addressLine1","city","stateProvince","postalCode","latitude","longitude","taxGeoCode","matchCode","locationCode"],"addressResponseV1":{"addresses":[{"taxGeoCode":"442150950","apartmentLabel":"","matchCode":"S80","city":"EDINBURG","postalCode":"785413355","latitude":"26.307701","houseNumber":"121","stateProvince":"TX","leadingDirectional":"W","streetName":"VAN WEEK","lastLine":"EDINBURG, TX 78541-3355","addressLine1":"121 W VAN WEEK ST","addressLine2":"","streetSuffix":"ST","locationCode":"AP05","trailingDirectional":"","longitude":"-98.162231","apartmentNumber":""}]},"addressRequest":{"clientId":"1159222917","city":"EDINBURG","postalCode":"78541","multiMatch":false,"addressLine1":"121 W VAN WEEK ST","addressLine2":"","state":"TX","sessionId":"1159222917","userId":"1366654994","transactionId":"522f4012-9737-483c-a3bb-8f23f146da0f"},"addressResponseV2":{"addresses":[{"geoResultCode":"S8HPNTSCZA","zipCode":"78541","taxGeoCode":"442150950","matchCode":"S80","city":"EDINBURG","latitude":26.307701,"addressLine1":"121 W VAN WEEK ST","zip4":"3355","addressLine2":"","state":"TX","locationCode":"AP05","longitude":-98.162231}]},"decisionMatrix":{"taxGeoCode":true,"matchCode":true,"city":true,"postalCode":true,"latitude":true,"addressLine1":true,"stateProvince":true,"locationCode":true,"longitude":false}}

2023-10-17T17:14:24,432Z client-id=0122346633, transaction-id=1fde5a12-ee65-4523-bed4-c8dd76cc666b [INFO ] [http-nio-9010-exec-6] c.c.a.s.AService AddressMetrics: {"fieldsToCompare":["addressLine1","city","stateProvince","postalCode","latitude","longitude","taxGeoCode","matchCode","locationCode"],"addressResponseV1":{"addresses":[{"taxGeoCode":"442152020","apartmentLabel":"","matchCode":"S80","city":"MISSION","postalCode":"785741749","latitude":"26.240278","houseNumber":"1004","stateProvince":"TX","leadingDirectional":"E","streetName":"DAWSON","lastLine":"MISSION, TX 78574-1749","addressLine1":"1004 E DAWSON LN","addressLine2":"","streetSuffix":"LN","locationCode":"AP05","trailingDirectional":"","longitude":"-98.310512","apartmentNumber":""}]},"addressRequest":{"clientId":"0122346633","city":"MISSION","postalCode":"78574","multiMatch":false,"addressLine1":"1004 E DAWSON LN","addressLine2":"","state":"TX","sessionId":"0122346633","userId":"0867774533","transactionId":"1fde5a12-ee65-4523-bed4-c8dd76cc666b"},"addressResponseV2":{"addresses":[{"geoResultCode":"S8HPNTSCZA","zipCode":"78574","taxGeoCode":"442152020","matchCode":"S80","city":"MISSION","latitude":26.240278,"addressLine1":"1004 E DAWSON LN","zip4":"1749","addressLine2":"","state":"TX","locationCode":"AP05","longitude":-98.310512}]},"decisionMatrix":{"taxGeoCode":false,"matchCode":true,"city":false,"postalCode":true,"latitude":true,"addressLine1":true,"stateProvince":true,"locationCode":true,"longitude":true}}

What I am trying to achieve here is to get the stats of each field within the decisionMatrix object, as below:

Field       TRUE  FALSE
taxGeoCode  1     1
matchCode   2     0
city        1     1

Any suggestions?
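A minimal sketch of one way to do this, assuming every event contains the literal "AddressMetrics: " followed by the JSON payload and that decisionMatrix holds only flat boolean fields; the index filter is a placeholder:

index=your_index "AddressMetrics"
``` pull the JSON payload out of the raw event ```
| rex field=_raw "AddressMetrics: (?<json>\{.+\})"
``` extract the decisionMatrix object as a string ```
| spath input=json path=decisionMatrix output=dm
``` turn {"a":true,"b":false,...} into one row per field ```
| eval dm=replace(dm, "[{}\"]", "")
| makemv delim="," dm
| mvexpand dm
| rex field=dm "^(?<Field>[^:]+):(?<Value>true|false)$"
``` count true/false per field ```
| chart count over Field by Value

chart produces columns named true and false; rename them if you need the exact TRUE/FALSE headers.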