All Posts



There is a setting for roles in Splunk that configures which indexes are searched by default when no index is specified in the search itself. If I understood your question correctly, my guess is that this is what is going on here. The role of the user running the macro probably doesn't have the index where the data resides set as a default searched index. Here is a screenshot of the UI settings for a role's default searched indexes.
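For reference, the same setting can also be made directly in authorize.conf; a minimal sketch, where the role and index names are hypothetical examples:

```
# authorize.conf (role and index names are examples)
[role_app_users]
# indexes searched when a search does not name one explicitly
srchIndexesDefault = main;app_index
# indexes the role is allowed to search at all
srchIndexesAllowed = main;app_index
```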
Hi @gcusello, there are lots of datapoints each day, which is why I need to break them out into multiple indexes; it will probably increase search speed, similar to the sharding mechanism in Elasticsearch or InfluxDB.
Hello, my local Splunk address is 127.0.0.1:514. I enabled remote logging on my endpoint and entered the above address as the endpoint's (syslog) remote log server address, but I'm not receiving the logs from the endpoint in Splunk. Any advice, please? Thanks
The only exception to "don't edit files inside the default folder" is when you have created your own app. Then you must edit the default files before deploying it to Splunk. But after that, edits made through the GUI again touch only the local files.
Thanks for the reply! Indeed, I have already converted the file into the metrics CSV format of a separate row per timestamp per metric. That works and I can ingest the data. However, it increases the file size from 92 MB to 1.4 GB, which is very, VERY wasteful to be sure. I will work the problem some more and see what I can do.
Hi Community People. Our team has stood up a new instance of Splunk, and we have deployed some cool new apps. One issue I have run into, however, is that there seems to be some weirdness in how the app expects the data. Specifically, the predefined queries (some using macros) don't seem to work unless an index is specified. Is there an explanation for this?

sourcetype=[some preconfigured type from the app] | stats count by someField   <=== doesn't seem to work
index=someIndex sourcetype=appDefinedSourceType | stats count by someField   <=== this works
The number of rows is not an issue; Splunk regularly handles files much larger than 92 MB. The TRUNCATE setting applies to events, not headers. Since you're using Python now, consider a scripted input to read the file and convert it to k=v format.
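A minimal sketch of what such a script might do, assuming a wide metrics CSV with a `timestamp` column (all column names here are hypothetical). Unlike the row-per-metric format, it keeps all metrics for a timestamp on one k=v line:

```python
import csv
import io

def csv_to_kv(csv_text, time_col="timestamp"):
    """Convert a wide metrics CSV (one column per metric) into one
    k=v event per row, keeping every metric on a single line."""
    reader = csv.DictReader(io.StringIO(csv_text))
    events = []
    for row in reader:
        ts = row.pop(time_col)
        kv = " ".join(f"{k}={v}" for k, v in row.items())
        events.append(f"{ts} {kv}")
    return events

sample = "timestamp,cpu,mem\n2024-01-15T00:00:00,0.5,42\n"
for event in csv_to_kv(sample):
    print(event)  # 2024-01-15T00:00:00 cpu=0.5 mem=42
```

A real scripted input would read the file from disk and write events to stdout, but the conversion logic is the same.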
Thank you, that makes total sense!
The timechart command will produce zero results that pad the X-axis and should give the desired results.

<chart depends="$show_chart_terminations$,$timeEarliest$,$timeLatest$">
  <search>
    <query>... | timechart count BY some_field</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <option name="charting.axisX.minimumNumber">$timeEarliest$</option>
  <option name="charting.axisX.maximumNumber">$timeLatest$</option>
  <option name="charting.chart">column</option>
</chart>
Hi, I have a dashboard with a time picker and a dummy search to transform relative timestamps into absolute timestamps:

<search>
  <query>| makeresults</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
  <progress>
    <eval token="timeEarliest">strptime($job.earliestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
    <eval token="timeLatest">strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
  </progress>
</search>

Next, I have a chart querying something using the time picker from the form. By default, the chart automatically adjusts the X-axis to the results found, not to the entire searched timespan. I want to change this behavior and tried setting charting.axisX to the absolute timestamp values, but it doesn't seem to work. Is there something I am missing?

<chart depends="$timeEarliest$,$timeLatest$">
  <search>
    <query>... | chart count OVER _time BY some_field</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <option name="charting.axisX.minimumNumber">$timeEarliest$</option>
  <option name="charting.axisX.maximumNumber">$timeLatest$</option>
  <option name="charting.chart">column</option>
</chart>
Hi @indeed_2000, avoid creating an index for each day; Splunk isn't a database and an index isn't a table: you should create a different index only when you have different retentions or different access grants. You can search an index using the timestamp even if it's only one index; you don't need a different index per day! The update frequency of a summary index depends on the scheduled search you are using; as the name itself says, it's scheduled, so you have to give it a schedule frequency, which can also be very frequent, depending on the execution time of the search itself: e.g., if the scheduled search runs in 30 seconds, you can schedule it every minute, but I don't suggest running it too frequently, because you could have skipped searches. Running a search in real time is also possible, but it requires many resources, so avoid it. Ciao. Giuseppe
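For reference, a scheduled search that populates a summary index can be sketched in savedsearches.conf roughly like this (the stanza name, index, schedule, and search are all hypothetical examples):

```
# savedsearches.conf sketch -- names and search are examples
[populate_my_summary]
search = index=main sourcetype=my_logs | stats count avg(value) as avg_value by host
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m@m
dispatch.latest_time = @m
action.summary_index = 1
action.summary_index._name = my_summary
```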
I need to create a summary index continuously, in real time, and I now have two questions: 1 - I run a Splunk forwarder on the client and logs are sent to the Splunk server. Each line contains lots of data, so I need to build the summary index as soon as a log is received and store the summary of each line in that summary index continuously, in real time. 2 - Is it possible to automatically create a new index for each day, like myindex-20240115, myindex-20240116, as data comes in from the forwarder? Thanks
Datetime calculations such as finding a difference should be done with epoch times, so rather than formatting now(), you should parse timestampOfReception using strptime() so you can subtract one from the other.
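As a sketch, assuming timestampOfReception looks like "2024-01-15 08:30:00" (adjust the strptime format string to match the actual data):

```
| eval receptionEpoch = strptime(timestampOfReception, "%Y-%m-%d %H:%M:%S")
| eval diffSeconds = now() - receptionEpoch
| where diffSeconds > 4*3600
```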
| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic
| search NOT (All_Traffic.src_ip [| inputlookup internal_ranges.csv ])

Thank you so much!! Your second option yielded the correct results. Would you mind helping me with the other portion, please? Also, if you know where I can find documentation that can help me better understand lookups for these kinds of searches, that would be appreciated. I am new to Splunk, and as much as I like it, I find it challenging at times.
Hi Team, I have given a user role read-only access, but the user can still edit and delete reports. How can I restrict the user's access to read-only for the reports? The configuration and role capabilities I provided are given below. Please help me restrict the user's access so that the user cannot delete or edit the Splunk queries.

[savedsearches/krack_delete]
access = read : [ * ], write : [ power ]
export = system
owner = vijreddy@xxxxxxxxxxxxx.com
version = 9.0.5.1
modtime = 1704823240.999623300
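One possible cause, based on the stanza shown: write : [ power ] grants edit/delete to anyone holding (or inheriting) the power role, regardless of other read-only settings. A minimal sketch of a tighter permission stanza (the write roles shown are examples; adjust to your environment):

```
# metadata/local.meta sketch -- restrict writes to admin only
[savedsearches/krack_delete]
access = read : [ * ], write : [ admin ]
export = system
```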
Hi, I have a dataset of very poor quality with multiple encoding errors. Some fields contain data like "&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;", which should be "Алексей". My first idea was to search every faulty dataset and convert it externally with a script, but I'm curious whether there's a better way using Splunk. I have no idea how to get there, though. I somehow need to get every &#(\d{4}); and I could use printf("%c", \1) to get the correct Unicode character, but I have no idea how to apply that to every occurrence in a single field. Currently I have data like this:

id  name
1   &#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;

Where I want to get to is this:

id  name                                               correct_name
1   &#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;  Алексей

Any ideas whether that is possible without using Python scripts in Splunk? Regards, Thorsten
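For comparison, if an external script (e.g. as a scripted or external lookup) does turn out to be acceptable, Python's standard library already decodes these numeric HTML entities directly; a minimal sketch:

```python
from html import unescape

# Numeric HTML entities like &#1040; decode straight to Unicode characters.
broken = "&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"
print(unescape(broken))  # Алексей
```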
Hello, I am looking for any guidance or info about the possibility of using Microsoft AMA agents to forward logs to Splunk instead of using Splunk universal forwarders. I know you will ask: why?! Let's say I have some requirements and constraints that oblige me to use AMA agents. I need to know the feasibility of this integration and whether there are any known issues or limitations. Thank you for your help. (Excuse me if my question is vague, I am kind of lost here.)
Hi, can you please tell me how I can extract the events for which the difference between the current time and timestampOfReception is greater than 4 hours, for the below Splunk query:

`eoc_stp_events_indexes` host=p* OR host=azure_srt_prd_0001 (messageType= seev.047* OR messageType= SEEV.047*) status = SUCCESS targetPlatform = SRS_ESES NOT [ search (index=events_prod_srt_shareholders_esa OR index=eoc_srt) seev.047 Name="Received Disclosure Response Command" | spath input=Properties.appHdr | rename bizMsgIdr as messageBusinessIdentifier | fields messageBusinessIdentifier ]
| eval Current_time = strftime(now(),"%Y-%m-%d %H:%M:%S ")
| eval diff = Current_time - timestampOfReception
| fillnull timestampOfReception, messageOriginIdentifier, messageBusinessIdentifier, direction, messageType, currentPlatform, sAAUserReference value="-"
| sort -timestampOfReception
| table diff, Current_time, timestampOfReception, messageOriginIdentifier, messageType, status, messageBusinessIdentifier, originPlatform, direction, sourcePlatform, currentPlatform, targetPlatform, senderIdentifier, receiverIdentifier, currentPlatform
| rename timestampOfReception AS "Timestamp of reception", originPlatform AS "Origin platform", sourcePlatform AS "Source platform", targetPlatform AS "Target platform", senderIdentifier AS "Sender identifier", receiverIdentifier AS "Receiver identifier", messageOriginIdentifier AS "Origin identifier", messageBusinessIdentifier AS "Business identifier", direction AS Direction, currentPlatform AS "Current platform", sAAUserReference AS "SAA user reference", messageType AS "Message type"
Maybe.  All knowledge objects in the ESCU app will be disabled so any app (including ES) that tries to use them likely will fail.
The easier way to mask data is with SEDCMD in props.conf. Note that the character class should be [A-Za-z], not [A-z], which also matches a few punctuation characters:

SEDCMD-emailaddr-anonymizer = s/([A-Za-z0-9\._%+-]+@[A-Za-z0-9\.-]+\.[A-Za-z]{2,63})/********@*********/g