All Posts

Hi @Zhangyy

This should give you approx 25km in each direction, as you've explained:

| where lat>=35.5 AND lat<=36.0 AND lon>=139.5 AND lon<=140.0

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
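For context, a minimal end-to-end sketch of how this filter might sit in the full search from the question (index name and FROM_IP come from the question; the table fields are illustrative):

index=xxxxx
| iplocation FROM_IP
| where lat>=35.5 AND lat<=36.0 AND lon>=139.5 AND lon<=140.0
| table FROM_IP City lat lon

Note that iplocation emits the fields lat and lon rather than latitude and longitude, which is likely why the attempt in the question returned nothing.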
Hi @livehybrid, I'm still unable to get the fields listed, even after updating props.conf:

[preprocess_case]
TRANSFORMS-setsourcetype = sourcetype_router, sourcetype_router2
SHOULD_LINEMERGE=false
LINE_BREAKER=(\[)|(([\r\n]+)\s*{(?=\s*"attribute":\s*{))|(\])
TRUNCATE=100000
TIME_PREFIX="ClosedDate":\s*"

[too_small]
PREFIX_SOURCETYPE = false
Hi @LearningGuy

If you want to collect the data into a "summary" index, you don't have to use the method that appends the "summaryindex" command if that doesn't do what you need. Instead, just create your search as you did with the collect command (with the output mode set to HEC) and then schedule the report to run at the relevant interval.

Check out "Manually configure a report to populate a summary index" in the summary indexing docs.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
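As a concrete illustration, a minimal sketch of a scheduled report that populates a summary index via collect (the index, sourcetype, and field names here are hypothetical; output_format=hec is the HEC output mode mentioned above):

index=web sourcetype=access_combined
| stats count AS hits BY status
| collect index=my_summary output_format=hec

Saved as a report and scheduled (e.g. hourly), this writes the aggregated rows into my_summary on each run.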
Hi @Zhangyy, try without dots and use quotes. Ciao. Giuseppe
Hi @rahulhari88, my hint comes from the Splunk Cluster Administration Course; it's probably fine your way too: try it. Ciao. Giuseppe
Hi @ws

The reason you aren't getting the fields listed is that the data isn't being parsed as valid JSON. To remove the trailing "]", try the following LINE_BREAKER:

LINE_BREAKER=(\[)|(([\r\n]+)\s*{(?=\s*"attribute":\s*{))|(\])

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
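For reference, a sketch of the kind of input this LINE_BREAKER assumes (the Id/Name fields are borrowed from later in the thread; the exact payload is an assumption):

[
  { "attribute": { "Id": "1", "Name": "alpha" } },
  { "attribute": { "Id": "2", "Name": "beta" } }
]

The intent is that the first and last alternations strip the wrapping [ and ], while the middle one breaks before each { "attribute": ... } object, so each event lands as a standalone JSON object.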
Hi team, I have a question related to Splunk SOAR. I'm working on a new community app that will include an on-poll action. This action will ingest a large number of events into SOAR. I came across a document that mentions a few limits, including that 61k events were tested. I just wanted to check if anyone knows what configuration was used for that test? (For example, what environment or specs were in place when they tested the 61k ingestion?)
Hi @livehybrid, I tried the following method to write into the local file, keeping the file at /tmp, but it still didn't work. For my situation, I think the best approach would be to keep a record of something like "seen before record.txt", do a comparison, write only new records into the file, and remove previously indexed entries. At least the current approach is workable, but we'll need to monitor the file size of "seen before record.txt" as it continues to grow. For now, the file size isn't a concern since it only stores a limited number of tracking records.
Hi, I'm not sure what the root cause is, as I was only making a minor adjustment to ignore the [ ] in transforms.conf. Previously I was able to view fields like Id and Name and their values, but currently nothing shows. I tried redoing props.conf, transforms.conf and inputs.conf, adding parameter by parameter, and it still didn't work.
Thank you all for your timely replies. Sorry, it might have been difficult to understand because the scope wasn't specified precisely. May I ask how to write SPL for the following range?

Latitude: from 35.5 degrees north to 36.0 degrees north
Longitude: from 139.5 degrees east to 140.0 degrees east

I wrote the following, but an error occurred:

index=xxxxx
| table FROM_IP
| iplocation FROM_IP
| where latitude >= 35.5 AND latitude <= 36.0
| where longitude >= 139.5 AND longitude <= 140

Thanks
But it says we can, @gcusello: https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Sitereplicationfactor
Hi @livehybrid, @isoutamo

Thanks a lot for your help! I'll try the configuration with CLONE_SOURCETYPE and will come back here to let you know if it works for me. Thanks again for your support!
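For anyone landing here later, a minimal sketch of what a CLONE_SOURCETYPE setup typically looks like (the stanza and sourcetype names below are borrowed from elsewhere in this feed but are hypothetical here; only the CLONE_SOURCETYPE setting itself is from the thread):

# transforms.conf
[clone_to_router2]
REGEX = .
CLONE_SOURCETYPE = sourcetype_router2

# props.conf
[sourcetype_router]
TRANSFORMS-clone = clone_to_router2

Each incoming sourcetype_router event is duplicated into a new event with sourcetype sourcetype_router2, which can then be routed or parsed independently.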
Hi @rahulhari88, you can use only origin and total:

[general]
site = site_DC

[clustering]
mode = manager
manager_switchover_mode = auto/manual
manager_uri = clustermanager:cm1,clustermanager:cm2
multisite = true
available_sites = site_DC, site_DR
site_replication_factor = origin:2, total:4
site_search_factor = origin:2, total:3
replication_factor = 2
pass4SymmKey = <redacted>
cluster_label = abc_idxcluster

[clustermanager:cm1]
manager_uri = https://CM1:8089

[clustermanager:cm2]
manager_uri = https://CM2:8089

Ciao. Giuseppe
Hi @Zhangyy

You could try the following:

| iplocation yourIPField
| where abs(lon - 0.89) <= (100/111) AND abs(lat - 0.91) <= (100/111)

This checks whether a point with coordinates (lon, lat) is within a rectangular area centered at (0.89, 0.91) with a "radius" of approximately 0.9009 degrees in both the longitude and latitude directions. The rectangle is approximately 100 kilometers wide and 100 kilometers tall, assuming a rough conversion of 1 degree of latitude to 111 kilometers.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @Mridu27, in addition to the solutions from @livehybrid and @kiran_panchavat, you could simply change this user's password, so the user will remain active but be practically disabled! Ciao. Giuseppe
Hi @Cheng2Ready,

If you have a lookup containing all the holidays, it's easier to use it as a subsearch in the main search, something like this:

index=xxx <xxxxxxx> NOT (date_wday="saturday" OR date_wday="sunday") NOT
    [ | inputlookup holidays.csv
      | eval date_year=strftime(HolidayDate,"%Y"), date_month=lower(strftime(HolidayDate,"%B")), date_mday=tonumber(strftime(HolidayDate,"%d"))
      | fields date_year date_month date_mday ]

This excludes weekends and, via the subsearch, every date listed in the lookup. If you want, in the same way you could also add a rule for out-of-office hours (e.g. 18:00-09:00).

Ciao. Giuseppe
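The lookup is assumed to hold holiday dates as epoch time in a HolidayDate column, roughly like this (values are placeholders):

HolidayDate
<epoch for 2025-01-01>
<epoch for 2025-12-25>

If HolidayDate were stored as a date string instead, you'd wrap it in strptime before the strftime calls above.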
Thinking of using this as the config, with RF=4 and SF=3:

[general]
site = site_DC

[clustering]
mode = manager
manager_switchover_mode = auto/manual
manager_uri = clustermanager:cm1,clustermanager:cm2
multisite = true
available_sites = site_DC, site_DR
site_replication_factor = origin:2, site_DC:2, site_DR:2, total:4
site_search_factor = origin:2, site_DC:2, site_DR:1, total:3
replication_factor = 2
pass4SymmKey = <redacted>
cluster_label = abc_idxcluster

[clustermanager:cm1]
manager_uri = https://CM1:8089

[clustermanager:cm2]
manager_uri = https://CM2:8089
Hi @hazardoom,

As @livehybrid said, you cannot move savedsearches into local. My hint is to copy all of them manually; it's really difficult to upload a custom app and then delete objects when you need to!

Ciao. Giuseppe
Hi @rahulhari88,

I don't like having only one copy of the data per site, because that way you need to access both sites when one indexer is down. With origin:1,total:2, an event indexed on a site1 peer keeps one copy on site1 and its only other copy on site2, so each site holds exactly one copy.

Anyway, you have to configure this in $SPLUNK_HOME/etc/system/local/server.conf on your Cluster Manager:

[clustering]
multisite = true
mode = master
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = <your_password>

or using the CLI:

/opt/splunk/bin/splunk edit cluster-config -mode master -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -secret <your_password>

Pay attention to Search Affinity: it reduces the traffic between sites, but when one site is down you must use the Search Head of the live site, otherwise you won't see all the data.

Ciao. Giuseppe
Not entirely sure what you're trying to do, but this is a macro for the haversine formula:

eval hv_rlat1 = pi()*$dest_lat$/180, hv_rlat2 = pi()*$source_lat$/180, hv_rlat = pi()*($source_lat$-$dest_lat$)/180, hv_rlon = pi()*($source_lon$-$dest_lon$)/180
| eval hv_a = sin(hv_rlat/2) * sin(hv_rlat/2) + cos(hv_rlat1) * cos(hv_rlat2) * sin(hv_rlon/2) * sin(hv_rlon/2)
| eval hv_c = 2 * atan2(sqrt(hv_a), sqrt(1-hv_a))
| eval distance = round(6371 * hv_c * 1000,0)
| fields - hv_rlat, hv_rlat1, hv_rlat2, hv_rlon, hv_a, hv_c

Set it up to take 4 parameters with these named params: source_lat, source_lon, dest_lat, dest_lon. Then you can just use `haversine(a_lat, a_lon, b_lat, b_lon)` to get the distance (in metres, given the 6371 km Earth radius and the *1000 factor) between two points.
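For completeness, a sketch of how this might be saved in macros.conf (the macro name matches the usage above; the rest follows standard macros.conf syntax with backslash line continuations):

[haversine(4)]
args = source_lat, source_lon, dest_lat, dest_lon
definition = eval hv_rlat1 = pi()*$dest_lat$/180, hv_rlat2 = pi()*$source_lat$/180, hv_rlat = pi()*($source_lat$-$dest_lat$)/180, hv_rlon = pi()*($source_lon$-$dest_lon$)/180 \
| eval hv_a = sin(hv_rlat/2) * sin(hv_rlat/2) + cos(hv_rlat1) * cos(hv_rlat2) * sin(hv_rlon/2) * sin(hv_rlon/2) \
| eval hv_c = 2 * atan2(sqrt(hv_a), sqrt(1-hv_a)) \
| eval distance = round(6371 * hv_c * 1000,0) \
| fields - hv_rlat, hv_rlat1, hv_rlat2, hv_rlon, hv_a, hv_c

Then `haversine(lat1, lon1, lat2, lon2)` expands inline wherever it's used in a search.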