All Posts

Hi @BRFZ, yes, see https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Indexesconf. In short, frozenTimePeriodInSecs is the total retention time (Hot+Warm+Cold), and maxHotSpanSecs governs the Hot+Warm period. Ciao. Giuseppe
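To make that concrete, here is a minimal indexes.conf sketch; the index name my_index and the retention values are illustrative assumptions, not taken from this thread:

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# total retention (hot+warm+cold) before data is frozen: 90 days
frozenTimePeriodInSecs = 7776000
# upper bound on the timespan covered by a hot bucket: 1 day
maxHotSpanSecs = 86400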
That perfectly resolved my problem. Many thanks!!!
The best way forward here is to set up a new community account using your new email address, then contact community support and ask them to transfer your badges etc. from your old account to your new account. There isn't an easy self-service way to do this (afaik).
Hello, is it possible to define the retention duration of logs (hot, warm and cold)? If yes, how can this be done? Or do we only have the option to define frozenTimePeriodInSecs?
Sorry, my bad... that worked. Thanks so much!
That didn't seem to do anything; I am trying to sort the columns in order, not the rows.
Have you tried | table endpointOS * ? That should sort the columns in alphabetical order, which might be enough here.
Is that a field in Splunk that is a string? You can do this by swapping the characters around. For your first example:

| eval date=replace(date, "(\d{4})-(\d{2})", "\2/\1")

and for your second:

| eval date=replace(date, "(\d{4})\/Q(\d)", "Q\2/\1")

where your date field is called date.
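For the plain year-month case, a round trip through epoch time is an alternative; this is only a sketch, assuming the field always matches %Y-%m:

| eval date=strftime(strptime(date, "%Y-%m"), "%m/%Y")

The quarter format has no strptime specifier, so the replace() approach above is the simpler choice there.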
Hey, not sure if anyone can help; I am trying to sort the columns in numerical order. Thanks in advance!
P_vandereerden's reply is a good starting point, but there are two things to consider:

1. The use of a subsearch to constrain an outer search may not perform well if there are a large number of request ids with that log line. If you are expecting a large number of hits for "log_line" then you may need to consider a different approach.

2. The transaction command has limitations, and although it has its use cases, its options should be understood in relation to your data set, particularly when your data set is large. Very often the stats command can achieve the same thing as transaction without the limitations, so it very much depends on what you want to do with the resultant grouped data. For example, this is generally a simple replacement for transaction:

| stats values(_raw) as _raw range(_time) as duration count by requestId

which will give you the raw events, the duration from first to last, and the number of events for any given request id.
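Putting that together with the rex extraction from the original post, a subsearch-free sketch (field and index names assumed from the thread) could look like:

index=myapp
| rex field=_raw "req_id=(?<req_id>[a-zA-Z0-9]*)"
| eventstats sum(eval(if(searchmatch("some log line"), 1, 0))) as line_hits by req_id
| where line_hits > 0
| stats values(_raw) as _raw range(_time) as duration count by req_id

The eventstats pass flags every req_id that has at least one event matching the log line, so the final stats only groups the requests of interest.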
Same for First Name and Last Name (under Personal Information): Any changes made here will be reversed after the next login.
Hello, I have a date 2024-06. How can I convert it to 06/2024? And 2023/Q4 to Q4/2023?
Use the first search as a subsearch:

index=myapp [ search index=myapp "some log line" | rex field=_raw "req_id=(?<req_id>[a-zA-Z0-9]*)" | table req_id ]
| transaction req_id
Hi @akgmail, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi @akgmail, this seems to be a different question, even if on a similar topic. Anyway, to do calculations between dates, you always have to transform them into epoch time (when they aren't already in this format); then you have numbers that you can use for all your operations. If you don't like the format of the duration, you can create your own function to display a duration in the format you like by doing the math yourself. So if you want the duration in hours, you have to divide the diff number (which is in seconds) by 3600:

| eval diff_in_hours=round((now()-_time)/3600, 2)

Then you don't need to rename now() and _time. Ciao. Giuseppe
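As a sketch of the threshold comparison asked about above (the 24-hour value is taken from the question):

| eval diff_in_hours=round((now()-_time)/3600, 2)
| eval over_threshold=if(diff_in_hours > 24, "yes", "no")
| table _time diff_in_hours over_threshold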
My application is a backend web service. All events in a request contain the same value for a "req_id" field. I have a use-case where I want to look at all the events that occurred for requests, only when a particular log line is present. My first query would be this:

index="myapp" AND "some log line" | rex field=_raw "req_id=(?<req_id>[a-zA-Z0-9]*)"

And then my second query would be:

index="myapp" AND "$req_id" | transaction req_id

where the $req_id would be fed in by the first query. How do I join these two queries? Thanks in advance!
@inventsekar We have onboarded the data coming from csv files in inputs.conf as below, and the data is loaded every day as a new csv is created with a date stamp.

[monitor:<path>/file_*.csv]
disabled = false
sourcetype = <sourcetype>
index = <index>

With this config, we are getting the data into Splunk and each row in the csv is loaded as a separate event. Query: index=<index> sourcetype=<sourcetype>

All we need is to see the data similar to the csv, i.e. we need to have a single-line header and the corresponding data for those columns (just how we see it when we load the csv from Add inputs via the Splunk GUI).

As of now, events look as shown below, and this repeats every day for the new csv files:

6/17/24 3:07:26.000 AM   col1,col2,col3,col4,col5,col6   host = <host> source = <source> sourcetype = <sourcetype>
6/17/24 3:07:26.000 AM   data1,data2,data3,data4,data5,data6   host = <host> source = <source> sourcetype = <sourcetype>

We need the output in the below format when we run the query:

col1 col2 col3 col4 col5 col6
data1 data2 data3 data4 data5 data6

Regards, Sid
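For reference, header-aware CSV parsing is normally configured in props.conf on the instance that monitors the files; a minimal sketch, reusing the <sourcetype> placeholder from the post (it applies only to newly indexed files, not to events already in the index):

[<sourcetype>]
INDEXED_EXTRACTIONS = csv
# treat the first line of each file as the column header
HEADER_FIELD_LINE_NUMBER = 1

A search such as index=<index> sourcetype=<sourcetype> | table col1 col2 col3 col4 col5 col6 then renders the data in the requested tabular form.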
@gcusello Thanks for your response, this helps. I am getting diff in string format, for example:

00:01:12 --> this says 1 hour and 12 mins
30+03:46:11 --> this says 30 days, 3 hours and 46 mins

I want to convert this diff to a number of hours and compare it with a threshold (a numeric value like 24). When I try this it is not giving me the correct value; I understand this is due to the fact that "diff" is in string format. Shall I first take the times in epoch, find the diff, and then convert it using the strftime function? Please assist me on the same. The query I am trying:

| eval currentEventTime=strftime(_time,"%Y-%m-%d %H:%M:%S"), currentTimeintheServer=strftime(now(),"%Y-%m-%d %H:%M:%S"), test_now=now(), test_time=_time, diff_of_epochtime=(now()-_time), diff=strftime(diff_of_epochtime,"%Y-%m-%d %H:%M:%S"), difforg=tostring(round(diff), "duration")
my SAML Response to Splunk.   <?xml version="1.0" encoding="UTF-8" standalone="no"?><samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="http://RTNB336:8000/saml/acs" ID="_4c16f9be1c813c774f2f9111fd5602f6" InResponseTo="RTNB336.21.0882C4AC-681F-4648-AD0F-FDD9F4BE114B" IssueInstant="2024-06-20T01:56:14.199Z" Version="2.0"><saml2:Issuer xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">http://hive.dreamidaas.com</saml2:Issuer><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/><ds:Reference URI="#_4c16f9be1c813c774f2f9111fd5602f6"><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/><ds:DigestValue>Wjlp0IBLeluYep7QMphL/ZBkVsDqxbrFcgSDFiFxQBo=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>Y0Lp7OR2BWIie+F60hJUhNdOLKhWlXnjLyD0Y7Ut1lPIYfL9uoClcQA98Ge961M7FjrC/uDA8yxGYKvApU4VOYzy7kLM0wbxFKUVXAuPAl5of0WWrMV8QMSWfCq8/ensPzlzsqg84tf86UgMZ2PodD6WOM9SIIW+izBPOP3emuv2c+UrvR2eyp1s+ItWn0AUB+0R0l+iqd+sNE/Gb+l9THlJYm68yLr2DY0nT66dOLKS3Q3jnMox6xrzsSnwaF6+H+dSnvd5YeBIMyjTC1bF6GjQpdudTNz8162TvtJjvAcTUOwhUmLyY4ytTvL+lHKOsDh57wZenvB4gVYzoF6T+A==</ds:SignatureValue><ds:KeyInfo><ds:X509Data><ds:X509Certificate>MIIDtDCCApygAwIBAgIKJxHdhEoMRRD/JjANBgkqhkiG9w0BAQsFADBCMQswCQYDVQQGEwJLUjEW MBQGA1UECgwNRHJlYW1TZWN1cml0eTEMMAoGA1UECwwDU1NPMQ0wCwYDVQQDDARST09UMB4XDTIy MDIyMzIzNTY1NFoXDTMyMDIyMzIzNTY1NFowTzELMAkGA1UEBhMCS1IxFjAUBgNVBAoMDURyZWFt U2VjdXJpdHkxDDAKBgNVBAsMA1NTTzEaMBgGA1UEAwwRTUFHSUNfU1NPX0lEUF9TaWcwggEiMA0G CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCGEA5RIOlCH/xJX5qnAQRixJfuUhv2dBoGyCjO1qbJ GuJh6lCF7mwsJbS+PStFrFvXBrfFt8S2QU7hndK5aj3f83IJiiv6y+a26/4xNf19sp6AtafAmWr9 kkI5AH51/9l8ypzf67OAUfrJxFPW6ZKgWiGp5yjrensl1IKxwP0joxUQXISI+epu07XpdWF2SJQ7 rVRNPZUP6sA+lNQsFDznN7moWFcU+UyrTJHDkgj/2qw4QvucNBY7Hj/bC/6KX1d0XSKfvQCfI4gu Gd/4FL1ApnyTvZ/tnbcbl420NWbKgtn19Q4ZIqhj10ruTzVn1YOpwqBGP/NlKDVmKOCem7tvAgMB AAGjgZ4wgZswagYDVR0jBGMwYYAULffLTJtBlWrpR2I1Coc4OG3funyhRqREMEIxCzAJBgNVBAYT AktSMRYwFAYDVQQKDA1EcmVhbVNlY3VyaXR5MQwwCgYDVQQLDANTU08xDTALBgNVBAMMBFJPT1SC AQEwHQYDVR0OBBYEFPvKSaxuZMLnM8ZqaFFkw0xeDp8CMA4GA1UdDwEB/wQEAwIGwDANBgkqhkiG 9w0BAQsFAAOCAQEAflCL2e6ZHxGDtK6PSjrGrhrkJ4XYHvKGnEWWajhL0RqqDoQhEAOAzEyGMcpQ zWBF6etI+uDlnr7EfPCAojvwfcQCEGK4gCHskHHDkXNz5MAC2sSHqVEn/ChAO9nRnTRo4EZlFVgH SXIDJqeWZd2wJ86u9cqA6XTyB/KuVwnTD2U/1W87ERpKlXtDNnC5hB3cp1ONaW+0+Fnn4NdSgMQd SwteL/CtU+q/gcYt1izy1RGdcDRR11+nmfkZT6UYCyKj0ea0yc4SbRjGIEOgJExDJBL8eyc4X2D3 4k6B4rhPzx+vF1OB1esHB69T6Vlo+iUM+XtoLFUOhloNiDzXq+2Hgg==</ds:X509Certificate></ds:X509Data></ds:KeyInfo></ds:Signature><saml2p:Status xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"><saml2p:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></saml2p:Status><saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" ID="_93ae10442348482eb51b04051c58267a" IssueInstant="2024-06-20T01:56:14.199Z" Version="2.0"><saml2:Issuer>http://hive.dreamidaas.com</saml2:Issuer><saml2:Subject><saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" NameQualifier="http://hive.dreamidaas.com" SPNameQualifier="RTNB336">rladnrud@devdreamsso.site</saml2:NameID><saml2:SubjectConfirmation 
Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml2:SubjectConfirmationData InResponseTo="RTNB336.21.0882C4AC-681F-4648-AD0F-FDD9F4BE114B" NotOnOrAfter="2024-06-20T02:01:14.199Z" Recipient="http://RTNB336:8000/saml/acs"/></saml2:SubjectConfirmation></saml2:Subject><saml2:Conditions NotBefore="2024-06-20T01:56:14.199Z" NotOnOrAfter="2024-06-20T02:01:14.199Z"><saml2:AudienceRestriction><saml2:Audience>RTNB336</saml2:Audience></saml2:AudienceRestriction></saml2:Conditions><saml2:AuthnStatement AuthnInstant="2024-06-20T01:55:52.000Z" SessionIndex="_8028c81d727dcc5a423afa58c645b8c5"><saml2:AuthnContext><saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocol</saml2:AuthnContextClassRef></saml2:AuthnContext></saml2:AuthnStatement></saml2:Assertion></samlp:Response>   There's no problem in my IDP. I don't know why Splunk can't verify signature properly
It says, "If you save your IdP certificate under $SPLUNK_HOME/etc/auth/idpCerts, please leave it blank." If you don't type idpCert.pem, it won't save. I don't think that's the cause, but what more information do I need to provide to make this error clearer? How should I solve this problem? I looked for the same error information and found that I need to register IdP certificate chains, but that's not a requirement, is it? Splunk 9.2.1, Windows 11, our company IdP.
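For reference, the SAML settings live in authentication.conf; a minimal sketch, where entityId comes from the SAML response above and the IdP URL is a placeholder, not a verified value for this IdP:

[authentication]
authSettings = saml
authType = SAML

[saml]
entityId = RTNB336
idpSSOUrl = <IdP SSO URL>
# leave idpCertPath unset if the certificate (including any chain) sits under $SPLUNK_HOME/etc/auth/idpCerts
idpCertPath = idpCert.pem
signedAssertion = true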