All Posts

Hi @Bisho-Fouad , every Splunk system can have two kinds of users: local users, created manually on the system, and users from an external authentication system (e.g. LDAP), which depend on that system. At a minimum, every Splunk system has a local user, usually called "admin", that is created at installation time. Did you try using this admin account? It seems that on this system you have only the admin user and no others. Ciao. Giuseppe
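If you can still log in with any working account, a quick way to check which users actually exist on the system is the users REST endpoint (a minimal sketch; splunk_server=local just limits the call to the instance you are on):
| rest /services/authentication/users splunk_server=local
| table title roles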
I'm glad I was able to help! 
Hi again @gcusello
1 - Which kind of troubles? Please check the attachment.
2 - Do you see your login page? Yes, it appears normal.
3 - Is your account a local or LDAP account? Sorry, but can you explain the difference? I am a new Splunk user.
Thanks for your time.
@gcusello I am facing the same issue. I checked the solution above and it worked fine; until September I received the report (email notification) for the triggered alert, but it stopped in October. What could be the issue?
Hi @Bisho-Fouad , which kind of trouble are you seeing? Do you get your login page? Is your account a local or an LDAP account? If local, you only have to remember the password; if LDAP, check the integration by logging in with a local account. Ciao. Giuseppe
Hi JohnEGones. I already have an admin user and have used it to log in many times before, but I'm having trouble logging in with my admin account, even though I've added it to the cluster master twice and verified it with CLI commands. Any recommendations?
Hi gcusello I already have admin user privileges, but I'm having trouble logging in with my admin account, even though I've added it to the cluster master twice and verified it with CLI commands.
My JSON file contains a total of 2 stages, as below:
per_stage_info_vendor_data: [
  {
    Stage: stage1
    WallClockTime: 0h:30m:23s
  }
  {
    Stage: stage2
    WallClockTime: 0h:52m:36s
  }
]
With the following regular expression we are able to get the hours, minutes and seconds:
rex field=per_stage_info_vendor_data{}.WallClockTime max_match=0 "((?<hours>\d+)h:(?<minutes>\d+)m:(?<seconds>\d+)s)"
But when I tried | eval stagetime=hours*3600+minutes*60+seconds it's not working, and when I checked further, none of the arithmetic operations work on these three fields (hours, minutes and seconds). Do I need to convert these fields to some other format?
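One likely cause (a sketch, not a verified answer for this exact event layout; only the field names shown in the post are used): with max_match=0 the rex command makes hours, minutes and seconds multivalue fields, and eval arithmetic does not operate element-wise on multivalue fields. Expanding the stages into separate result rows first, then extracting single values and converting them with tonumber(), avoids that:
| spath path=per_stage_info_vendor_data{}.WallClockTime output=WallClockTime
| mvexpand WallClockTime
| rex field=WallClockTime "(?<hours>\d+)h:(?<minutes>\d+)m:(?<seconds>\d+)s"
| eval stagetime = tonumber(hours)*3600 + tonumber(minutes)*60 + tonumber(seconds)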
Hi @nithys, at the end of your search you can use the table command to define the order of fields in the output, in your case:
<your_search> | table field1 field2 field3 datanotfoundbynewway
Then, you are using three very similar searches as subsearches: this isn't very efficient because every subsearch takes a CPU. In your case you could use something like this (please adapt my approach to your requirement):
index=dummyIndex source IN ("/dummy/Source") ("support request details" OR "input params" OR "sqs sent count" OR "Total messages published to SQS successfully" OR "unique objectIds" OR "data not found for Ids" OR "dataNotFoundIds")
| rex "\"objectType\":\"(?<objectType>[^\"]+)"
| rex "\"objectIdsCount\":\"(?<objectIdsCount>[^\"]+)"
| rex "\"uniqObjectIdsCount\":\"(?<uniqObjectIdsCount>[^\"]+)"
| rex "\"sqsSentCount\":\"(?<sqsSentCount>[^\"]+)"
| rex "\"totalMessagesPublishedToSQS\":\"(?<totalMessagesPublishedToSQS>[^\"]+)"
| spath output=payload path=dataNotFoundIds{}
| rename payload AS datanotfoundbynewway
| stats values(objectType) AS objectType values(objectIdsCount) AS objectIdsCount values(sqsSentCount) AS sqsSentCount values(totalMessagesPublishedToSQS) AS totalMessagesPublishedToSQS values(uniqObjectIdsCount) AS uniqObjectIdsCount count AS Re-ProcessRequest values(datanotfoundbynewway) AS datanotfoundbynewway
| addcoltotals labelfield=total label="Total"
| table objectType objectIdsCount sqsSentCount totalMessagesPublishedToSQS uniqObjectIdsCount datanotfoundbynewway
Probably this search will not work as is, but see my approach. Ciao. Giuseppe
| eval lastmodifiedWeek=strftime(epoc_last_modified,"%Y-%V")
| eval timeline="30-Oct-23"
| eval timeline_date=strptime(timeline,"%d-%b-%y")
| eval new_timeline=strftime(timeline_date,"%Y-%V")
| where lastmodifiedWeek<=new_timeline
| join max=0 type=left current_ticket_state [| inputlookup weekly_status_state_mapping.csv | rename Status as current_ticket_state | table current_ticket_state Lookup]
| stats count by Lookup lastmodifiedWeek
| eval timeline1 = strptime(lastmodifiedWeek." 1", "%Y-%U %w")
| eval timeline2=relative_time(timeline1,"-1w@w1")
| eval timeline = strftime(timeline2, "%Y-%m-%d")
| table timeline, Lookup, count
| chart values(count) as count over timeline by Lookup
| fillnull value=0
| tail 4
| reverse
Thanks for your reply. This is my query:
index=dummyIndex source IN ("/dummy/Source") "support request details"
| stats count
| rename count as Re-ProcessRequest
| appendcols [ search index=dummyIndex source IN ("/dummy/Source") "input params" OR "sqs sent count" OR "Total messages published to SQS successfully" OR "unique objectIds" OR "data not found for Ids" OR "dataNotFoundIds"
    | rex "\"objectType\":\"(?<objectType>[^\"]+)"
    | rex "\"objectIdsCount\":\"(?<objectIdsCount>[^\"]+)"
    | rex "\"uniqObjectIdsCount\":\"(?<uniqObjectIdsCount>[^\"]+)"
    | rex "\"sqsSentCount\":\"(?<sqsSentCount>[^\"]+)"
    | rex "\"totalMessagesPublishedToSQS\":\"(?<totalMessagesPublishedToSQS>[^\"]+)"
    | table objectType,objectIdsCount,sqsSentCount,totalMessagesPublishedToSQS,uniqObjectIdsCount
    | addcoltotals labelfield=total label="Total"
    | tail 1
    | stats list(*) as * ]
| appendcols [ search index=dummyIndex source IN ("/dummy/source") "dataNotFoundIds"
    | spath output=payload path=dataNotFoundIds{}
    | spath input=_raw
    | stats count by payload
    | addcoltotals labelfield=total label="Total"
    | tail 1
    | fields - payload,total
    | rename count as datanotfoundbynewway ]
Let me first ask this: can you please elaborate on your statement "if this is your issue, use table at the end of your search listing fields in the wanted order"? I am looking to modify the above query so that the column "datanotfoundbynewway" appears last.
Actual: it is always displayed as the second column.
Expected: I want that column to appear as the last column.
Also, how can I make use of stats in the above query instead of join? Thanks again!
Hi @Bisho-Fouad , you could manage the Cluster Manager and the Deployment Server via CLI: it isn't so easy, but it's possible! In any case you need an admin user, because every CLI command requires authentication. Ciao. Giuseppe
Hi @tom_porter, using CIM you have two solutions: you could add all the fields to the CIM data model (which I don't like), or you could try to normalize your data by adding a few fields and using calculated fields to insert the correct values. For example, you could add some fields to the CIM data model (exe, comm, path, filename, hostname) and then create some calculated fields:
| eval exe=if(type="TYPE1", 'TYPE1.exe', 'TYPE2.exe'), comm=if(type="TYPE1", 'TYPE1.comm', 'TYPE2.comm')
Then you can use these fields in your searches through data model values. For more info about normalization see:
https://www.splunk.com/en_us/blog/learn/data-normalization.html?locale=en_us
https://docs.splunk.com/Documentation/CIM/5.2.0/User/UsetheCIMtonormalizedataatsearchtime
Ciao. Giuseppe
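To try the idea ad hoc before saving anything as calculated fields (a minimal sketch with placeholder index and sourcetype, assuming one event type populates the TYPE1.* fields and the other the TYPE2.* fields, and using coalesce() here to pick whichever is present):
index=your_index sourcetype=your_sourcetype
| eval exe=coalesce('TYPE1.exe', 'TYPE2.exe'), comm=coalesce('TYPE1.comm', 'TYPE2.comm')
| table _time exe comm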
Hi @leooooowang, as I said, you have to create a new dashboard in Simple XML that has the same panels and inputs as the Advanced XML dashboard. You have to create all the panels and inputs using the same searches as in the old dashboard. The event timeline can be displayed using a chart or the Timeline visualization (https://splunkbase.splunk.com/app/3120), while the fields summary should be a list of fields that you can insert in a table. Ciao. Giuseppe
Hi @cross521, your question is very vague. Anyway, in general you have to index data in Splunk to analyze and use it. The steps are (in general) these: analyze the data and find the relevant parts (outside Splunk), then ingest them using the Splunk features (for more info see https://lantern.splunk.com/Splunk_Platform/Getting_Started/Getting_data_into_Enterprise), so you can search and use them. To save search results in CSV format there is the outputcsv command (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Outputcsv), but you still have to index the data in Splunk first. If you want to pre-process the data, you have to use a script (in the language you like) to prepare it before ingestion, but I'm not an expert in scripting and this isn't a Splunk issue, so I cannot help you there. Ciao. Giuseppe
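For example (a minimal sketch with placeholder index, sourcetype and field names), a search whose result table is saved to a CSV file on the search head with outputcsv could look like this:
index=your_index sourcetype=your_sourcetype
| stats count by host
| outputcsv my_results.csv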
Hi @gcusello : yes, creating those input fields with Simple XML is easy. But the hardest part is the UI of the search-results page. Our users want to keep the "event timeline" and the "fields summary" on the result page. I can't find any way to implement such UI functions with Simple XML, and the out-of-the-box "search" page does not look like it is implemented in Simple XML either...
Hi @nithys , without a sample of your logs I cannot check your regexes, and I don't understand what the key to correlate values is, because it seems that there isn't any common key to use in a stats command. Ciao. Giuseppe
Hi @aditsss , did you try something like this:
index="abc*" (sourcetype=600000304_gg_abs_ipc1 OR sourcetype=600000304_gg_abs_ipc2) "Message successfully sent to Cornerstone" source!="/var/log/messages"
Ciao. Giuseppe
Hi @nithys , let me understand: is your issue the order of fields at the end of your search? If so, use table at the end of your search, listing the fields in the wanted order. About the filter, you can add a search command after the objectType extraction. One last hint: try to avoid the join command. Splunk isn't a database, and join is very slow and resource-consuming! In Community you can find many examples of replacing join with stats. I could be more detailed if you shared your search using the Insert/Edit Code Sample button (<>), because the search parameters aren't clear. Ciao. Giuseppe
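As a generic illustration of that hint (a minimal sketch with hypothetical index, sourcetype and field names, not this thread's actual data), a left join on a common key such as:
index=idx_a sourcetype=st_a
| join type=left id [ search index=idx_b sourcetype=st_b | fields id status ]
can usually be rewritten as a single search plus stats, which scans both datasets once and groups by the key (the results are not identical in every edge case, but the pattern generally covers this use):
(index=idx_a sourcetype=st_a) OR (index=idx_b sourcetype=st_b)
| stats values(field_a) AS field_a values(status) AS status by id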